| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (4 classes) | body (string, 7–62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,866,854,688
|
[ONNX] Add draft_export as a strategy
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: new features"
] | 14
|
COLLABORATOR
|
Create draft_export strategy.
The strategy is added before jit and after strict=True, as the third fallback. Since it specializes tensors, it should be no less robust than the jit trace strategy.
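For illustration only (my own sketch, not part of this PR), the strategy chain is exercised through the dynamo-based ONNX exporter; per this PR, draft_export is attempted after the strict `torch.export` strategy and before the JIT trace fallback:
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return torch.sigmoid(x) * 2.0

# The dynamo-based exporter tries its capture strategies in order; if the
# earlier export strategies fail, draft_export (added by this PR) runs before
# the JIT trace fallback. Requires a PyTorch build with the dynamo exporter.
onnx_program = torch.onnx.export(M(), (torch.randn(2, 3),), dynamo=True)
onnx_program.save("m.onnx")
```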
| true
|
2,866,841,603
|
Package API for torch.compile
|
zhxchen17
|
open
|
[
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 5
|
CONTRIBUTOR
|
Package API for torch.compile
Following up on PR https://github.com/pytorch/pytorch/pull/145381, we implement
a new API for compiling models using the cpp wrapper and for saving/loading
compiled artifacts to disk.
A package is now designed as a per-torch.compile() object living with
the compilation context. Each time a recompilation happens, it collects
the compiled artifacts into a lookup table. When a new set of inputs is
passed to the compiled callable, before entering the dynamo cache we first
perform a lookup in the compile package and match by the serialized guards.
API names are tentative but the workflow roughly looks like the following:
```
def f(...): ...
compiled_f = torch.compile(f, package="my_dir/my_model")
compiled_f(*args)
compiled_f.save_package(prefix="/dir1")
...
compiled_f.load_package(prefix="/dir2")
```
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,866,757,583
|
[ROCm] Input vectorization in elementwise kernels for tensors with heterogeneous types
|
carlobertolli
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: rocm",
"rocm",
"ciflow/rocm",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
This patch enables input vectorization in elementwise kernels for tensors with heterogeneous types, and exemplifies its use for input tensors with types (float, bfloat16) when the functor type is float(float, float).
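For illustration (my own sketch, not from the patch), a mixed-dtype elementwise op of the kind this covers: the inputs have heterogeneous types (float, bfloat16) while the compute functor works in float:
```python
import torch

# Assumes a ROCm build, where AMD GPUs are exposed under the "cuda" device name.
a = torch.randn(1 << 20, device="cuda", dtype=torch.float32)
b = torch.randn(1 << 20, device="cuda", dtype=torch.bfloat16)

# Type promotion makes the elementwise functor compute in float32 while the
# kernel loads heterogeneous input types (float, bfloat16).
out = a + b
print(out.dtype)  # torch.float32
```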
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,866,682,656
|
[MPS] faster integer matmul for mps
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 4
|
COLLABORATOR
|
There is a naive matmul kernel written for MPS which is used when the input types are integer (and in some other cases on older versions of macOS). The old kernel is naive, with global memory accesses that really tank performance, especially when the matrices are sufficiently large.
This PR optimizes it (there is likely further headroom using simdgroup matrices, which I'll cover in a follow-up since writing that kernel will take more time).
## Performance comparison on M1 Pro:

You can get these numbers by running the script below, once with the old kernel compiled and once with the new kernel compiled (make sure to change the CSV filename that each run writes to):
```python
import torch
import numpy as np
import time
import csv
matrix_sizes = [32, 128, 512, 1024, 2048, 4096]
num_runs = 10
warmup_runs = 3
def run_int_mm(A, B):
    torch.mps.synchronize()
    start = time.perf_counter()
    c = A @ B
    torch.mps.synchronize()
    end = time.perf_counter()
    return c, end - start

results = {
    'N': [],
    'mean_time': [],
    'std_time': []
}

for n in matrix_sizes:
    print(f"\nBenchmarking N={n}")
    try:
        A_mps = torch.randint(low=-100, high=100, size=(n, n), dtype=torch.int8, device="mps")
        B_mps = torch.randint(low=-100, high=100, size=(n, n), dtype=torch.int8, device="mps")

        for _ in range(warmup_runs):
            _, _ = run_int_mm(A_mps, B_mps)

        times = []
        for _ in range(num_runs):
            _, t = run_int_mm(A_mps, B_mps)
            times.append(t)

        mean_time = np.mean(times)
        std_time = np.std(times)
        results['N'].append(n)
        results['mean_time'].append(mean_time)
        results['std_time'].append(std_time)
        print(f"Mean time: {mean_time:.4f}s ± {std_time:.4f}s")
    except RuntimeError as e:
        print(f"Error for N={n}: {e}")
        continue

with open('int_mm_benchmark_times_old.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['N', 'mean_time', 'std_time'])
    for i in range(len(results['N'])):
        writer.writerow([
            results['N'][i],
            results['mean_time'][i],
            results['std_time'][i]
        ])
```
| true
|
2,866,469,409
|
Raise KeyError when class attr is overwritten by a Module
|
vmoens
|
open
|
[
"Stale",
"release notes: nn"
] | 3
|
CONTRIBUTOR
|
Currently, this code doesn't raise an exception even though the result of the `setattr` is erroneous:
```python
import torch
from torch.nn import Module
from torch.nn.parameter import Parameter
class MyModule(Module):
class_attr = "some value"
c = MyModule()
print(c.class_attr) # prints "some value"
c.class_attr = MyModule()
print(c.class_attr) # prints "some value"
```
but this does
```python
import torch
from torch.nn import Module
from torch.nn.parameter import Parameter
class MyModule(Module):
class_attr = "some value"
c = MyModule()
# c.class_attr = "another value"
# print(c.class_attr)
print(c.class_attr)
c.class_attr = Parameter(torch.randn(3)) # breaks
```
In this PR, I propose raising a KeyError when `setattr` is called on a `Module` instance and a class attribute with that name already exists.
Since the current failure is silent, some users may suddenly see this error appear, but that would only mean they were wrongly assuming the `setattr` worked when it didn't.
| true
|
2,866,403,489
|
Update pybind11 submodule to 3.0.0-dev test
|
Skylion007
|
open
|
[
"open source",
"Stale",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,866,352,594
|
[Inductor] optimize the heuristics of outer loop fusion
|
jiayisunx
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144020
* __->__ #147523
Summary:
Optimize the heuristics of outer loop fusion: when the range of the first inner loop is much larger than the range of all outer loops, do not fuse the outer loops and fall back to standard codegen (see the sketch below).
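A minimal sketch of the shape regime this heuristic targets (my own illustration, not taken from the PR): the innermost reduction range dwarfs the outer loop ranges, so fusing over the outer loops is not profitable.
```python
import torch

def reduce_then_pointwise(x):
    # Reduction over a very large inner loop, followed by a pointwise consumer
    # that shares the small outer loops with it.
    s = x.sum(dim=-1, keepdim=True)
    return (x - s) * 2.0

compiled = torch.compile(reduce_then_pointwise)
x = torch.randn(8, 1 << 20)  # tiny outer range, huge inner range
print(compiled(x).shape)
```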
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,866,228,726
|
Fix the shape check inside gnll loss
|
KohakuBlueleaf
|
open
|
[
"module: nn",
"triaged",
"open source",
"topic: not user facing",
"bug"
] | 5
|
NONE
|
Fixes #147521
This modification allows users to pass a `var` of any size to GaussianNLLLoss, as long as it is broadcastable to the input/target size.
Therefore, the demo code in #147521 now produces the expected behaviour and correct output.
This allows any sizes that match:
`input.size = (..., n, ...), var.size = (..., 1, ...)`
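For illustration, a call this change is meant to accept (mirroring the demo in #147521), where the singleton dimension of `var` is not the last one:
```python
import torch
import torch.nn as nn

loss = nn.GaussianNLLLoss()
inp = torch.randn(4, 3, 32, 32)
target = torch.randn(4, 3, 32, 32)
var = torch.rand(4, 1, 32, 32) + 1e-6  # broadcastable to input; singleton in a non-final dim

print(loss(inp, target, var))
```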
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,866,222,575
|
Unexpected incorrect size error in GaussianNLLLoss
|
KohakuBlueleaf
|
open
|
[
"module: nn",
"module: loss",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
When using `nn.GaussianNLLLoss`, `var` is supposed to be allowed to have one dimension of size 1 as long as the other sizes match. But based on the behavior and the source code, we actually only allow the final dimension to be 1.
```python
import torch
import torch.nn as nn
loss = nn.GaussianNLLLoss()
test_inp = torch.randn(4, 3, 32, 32)
test_target = torch.randn(4, 3, 32, 32)
test_var = torch.randn(4, 1, 32, 32)
loss(test_inp, test_target, test_var.abs())
```
The demo code above results in the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\apoll\micromamba\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\apoll\micromamba\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\apoll\micromamba\Lib\site-packages\torch\nn\modules\loss.py", line 444, in forward
return F.gaussian_nll_loss(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\apoll\micromamba\Lib\site-packages\torch\nn\functional.py", line 3297, in gaussian_nll_loss
raise ValueError("var is of incorrect size")
ValueError: var is of incorrect size
```
This is not expected.
### Versions
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 專業版 (10.0.26100 64 位元)
GCC version: (GCC) 13.2.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: N/A
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:17:14) [MSC v.1941 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 571.96
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 13th Gen Intel(R) Core(TM) i9-13900K
Manufacturer: GenuineIntel
Family: 207
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3000
MaxClockSpeed: 3000
L2CacheSize: 32768
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] asyncclick==8.1.8.0
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] gpytorch==1.12
[pip3] lion-pytorch==0.2.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] onnxruntime-gpu==1.19.2
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.14.0
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-msssim==1.0.0
[pip3] torch==2.6.0+cu126
[pip3] torch-fidelity==0.3.0
[pip3] torchao==0.8.0
[pip3] torchaudio==2.6.0+cu126
[pip3] torchdata==0.8.0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.4.2
[pip3] torchsde==0.2.6
[pip3] torchtext==0.6.0
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,866,155,044
|
L-BFGS-B
|
MicheleBellomo
|
closed
|
[
"module: optimizer",
"triaged"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
Good morning everyone, I would like to know if there is any interest in implementing the L-BFGS-B algorithm in PyTorch. Currently, PyTorch provides an implementation of L-BFGS, but without support for box constraints.
There have been some independent attempts to develop L-BFGS-B, but in my experience, the available implementations have not worked properly. I believe that having a native implementation of L-BFGS-B in PyTorch could be highly beneficial to the community, expanding PyTorch’s applications beyond deep learning into more statistical domains.
For example, my request stems from the need to implement libraries for point processes and Hawkes processes, where PyTorch’s tensor computation enables significant parallelization and acceleration of maximum likelihood estimation. However, the absence of L-BFGS-B forces me to rely on SciPy’s implementation, requiring constant conversion between PyTorch tensors and NumPy arrays, preventing me from leveraging GPU acceleration.
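For context, a minimal sketch of the SciPy workaround described above (my own illustration; the objective is a stand-in, not a real Hawkes-process likelihood): PyTorch computes the loss and gradient, while `scipy.optimize.minimize` with `method="L-BFGS-B"` enforces the box constraints, at the cost of a NumPy round-trip on every step.
```python
import numpy as np
import torch
from scipy.optimize import minimize

def negative_log_likelihood(params: torch.Tensor) -> torch.Tensor:
    # Stand-in objective; a real point-process NLL would go here.
    return (params - 2.0).pow(2).sum()

def fun_and_grad(x_np: np.ndarray):
    x = torch.tensor(x_np, dtype=torch.float64, requires_grad=True)
    loss = negative_log_likelihood(x)
    loss.backward()
    # SciPy needs NumPy floats, so every step crosses the tensor/array boundary.
    return loss.item(), x.grad.numpy()

x0 = np.zeros(3)
res = minimize(fun_and_grad, x0, jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * 3)  # box constraints: params >= 0
print(res.x)
```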
Would there be any interest in working on this implementation?
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,865,947,296
|
[Inductor] Fix the decompositions of torch isin
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147519
**Summary**
Fixed two decomposition issues in `torch.isin`:
- Issue 1: As reported in [#147329](https://github.com/pytorch/pytorch/issues/147329), the current decomposition does not support cases where `test_elements` is a scalar. This is now implemented by referring to https://github.com/pytorch/pytorch/blob/ead970c8d035690c180641909b75da13fa16c76e/aten/src/ATen/native/TensorCompare.cpp#L1004-L1008
- Issue 2: Found while enabling a unit test with `elements = 1` and `test_elements = torch.tensor([1, 2, 3, 4])`, where Inductor produced different results compared to eager mode. This issue is fixed by referring to https://github.com/pytorch/pytorch/blob/ead970c8d035690c180641909b75da13fa16c76e/aten/src/ATen/native/cpu/TensorCompareKernel.cpp#L329-L338
**Test Plan**
```
python test/inductor/test_torchinductor.py -k test_isin_tensor_scalar
```
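For reference, a minimal repro sketch of the two scalar cases described above (my own illustration, not taken from the PR):
```python
import torch

def f(elements, test_elements):
    return torch.isin(elements, test_elements)

compiled = torch.compile(f)

# Issue 1: scalar test_elements.
print(compiled(torch.tensor([1, 2, 5]), 1))
# Issue 2: scalar elements with a tensor of test elements; Inductor should
# match eager mode here.
print(compiled(1, torch.tensor([1, 2, 3, 4])))
```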
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,865,945,462
|
Enable FSDP tests on XPU device
|
zhangxiaoli73
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)"
] | 11
|
CONTRIBUTOR
|
**Motivation:**
Enable FSDP tests on XPU device
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @gujinghui @guangyey
| true
|
2,865,655,105
|
[dtensor][cp] experiment: register flex_attention to a custom fn on DTensor within a custom dispatch mode
|
XilunWu
|
open
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147603
* __->__ #147517
* #147516
* #147515
* #147514
* #145353
### Summary
Successfully dispatch `flex_attention` within a given context to the custom CP `flex_attention`.
```
Traceback (most recent call last):
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/common_distributed.py", line 726, in run_test
getattr(self, test_name)()
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/common_distributed.py", line 599, in wrapper
fn()
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 405, in wrapper
raise e
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 402, in wrapper
func(self, *args, **kwargs) # type: ignore[misc]
File "/data/users/xilunwu/oss/pytorch/test/distributed/tensor/test_attention.py", line 498, in test_ring_flex_attention
out_dt = flex_attention(q_dist, k_dist, v_dist, block_mask=block_mask)
File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/nn/attention/flex_attention.py", line 1357, in flex_attention
out, lse = torch.compile(
File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/nn/attention/flex_attention.py", line 1345, in _flex_attention_hop_wrapper
return flex_attention_hop(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 744, in flex_attention_autograd
out, logsumexp = FlexAttentionAutogradOp.apply(
File "/data/users/xilunwu/oss/pytorch/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 610, in forward
out, logsumexp = flex_attention(
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 462, in wrapper
return torch.overrides.handle_torch_function(
File "/data/users/xilunwu/oss/pytorch/torch/overrides.py", line 1721, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 363, in dispatch
result = handler(mode, *args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/test/distributed/tensor/test_attention.py", line 525, in cp_flex_attention
return flex_attention_hop(
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 396, in dispatch
result = handler(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/test/distributed/tensor/test_attention.py", line 525, in cp_flex_attention
return flex_attention_hop(
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 396, in dispatch
result = handler(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/test/distributed/tensor/test_attention.py", line 525, in cp_flex_attention
return flex_attention_hop(
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 396, in dispatch
result = handler(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/test/distributed/tensor/test_attention.py", line 525, in cp_flex_attention
return flex_attention_hop(
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 455, in dispatch
return kernel(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 266, in sdpa_dense
out, lse = math_attention(
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 221, in math_attention
G = query.size(1) // key.size(1)
AttributeError: 'function' object has no attribute 'size'
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,865,654,819
|
[dtensor][cp] experiment: register flex_attention to a custom fn within a custom dispatch mode
|
XilunWu
|
open
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147603
* #147517
* __->__ #147516
* #147515
* #147514
* #145353
### Summary
Attempted to dispatch `flex_attention` within a given context to the custom CP `flex_attention`, but got the error below:
```
Traceback (most recent call last):
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/common_distributed.py", line 726, in run_test
getattr(self, test_name)()
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/common_distributed.py", line 599, in wrapper
fn()
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 405, in wrapper
raise e
File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 402, in wrapper
func(self, *args, **kwargs) # type: ignore[misc]
File "/data/users/xilunwu/oss/pytorch/test/distributed/tensor/test_attention.py", line 498, in test_ring_flex_attention
out_dt = flex_attention(q_dist, k_dist, v_dist, block_mask=block_mask)
File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/nn/attention/flex_attention.py", line 1357, in flex_attention
out, lse = torch.compile(
File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/nn/attention/flex_attention.py", line 1345, in _flex_attention_hop_wrapper
return flex_attention_hop(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 327, in dispatch
return kernel(*args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 744, in flex_attention_autograd
out, logsumexp = FlexAttentionAutogradOp.apply(
File "/data/users/xilunwu/oss/pytorch/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 610, in forward
out, logsumexp = flex_attention(
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 462, in wrapper
return torch.overrides.handle_torch_function(
File "/data/users/xilunwu/oss/pytorch/torch/overrides.py", line 1721, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 363, in dispatch
result = handler(mode, *args, **kwargs)
File "/data/users/xilunwu/oss/pytorch/test/distributed/tensor/test_attention.py", line 524, in cp_flex_attention
return flex_attention_hop(
File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 398, in dispatch
raise NotImplementedError(
NotImplementedError: There was no rule registered for HOP flex_attention and subclass <class 'torch.distributed.tensor.DTensor'>. We recommend filing an issue.
```
| true
|
2,865,654,591
|
[dtensor][cp] experiment: register flex_attention to a custom fn on DTensor
|
XilunWu
|
open
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147603
* #147517
* #147516
* __->__ #147515
* #147514
* #145353
### Summary
Attempted to dispatch flex_attention on DTensor to a custom CP flex_attention function, but got the error below. This error should be identical to #146994.
```
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] Caught exception:
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] Traceback (most recent call last):
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/common_distributed.py", line 726, in run_test
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] getattr(self, test_name)()
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/common_distributed.py", line 599, in wrapper
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] fn()
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/common_utils.py", line 3155, in wrapper
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] method(*args, **kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 405, in wrapper
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] raise e
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/testing/_internal/distributed/_tensor/common_dtensor.py", line 402, in wrapper
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] func(self, *args, **kwargs) # type: ignore[misc]
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/test/distributed/tensor/test_attention.py", line 493, in test_ring_flex_attention
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] out_dt = flex_attention(q_dist, k_dist, v_dist, block_mask=block_mask)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/eval_frame.py", line 589, in _fn
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/output_graph.py", line 1509, in _call_user_compiler
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] raise BackendCompilerFailed(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/output_graph.py", line 1488, in _call_user_compiler
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] compiled_fn = compiler_fn(gm, self.example_inputs())
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] compiled_gm = compiler_fn(gm, example_inputs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/__init__.py", line 2339, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return compile_fx(model_, inputs_, config_patches=self.config)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_inductor/compile_fx.py", line 2168, in compile_fx
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return aot_autograd(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/backends/common.py", line 101, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_functorch/aot_autograd.py", line 1158, in aot_module_simplified
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] compiled_fn = AOTAutogradCache.load(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 779, in load
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] compiled_fn = dispatch_and_compile()
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_functorch/aot_autograd.py", line 1143, in dispatch_and_compile
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] compiled_fn, _ = create_aot_dispatcher_function(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return _create_aot_dispatcher_function(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_functorch/aot_autograd.py", line 671, in _create_aot_dispatcher_function
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] fw_metadata = run_functionalized_fw_and_collect_metadata(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] flat_f_outs = f(*flat_f_args)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 899, in functional_call
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] out = PropagateUnbackedSymInts(mod).run(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/fx/interpreter.py", line 171, in run
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] self.env[node] = self.run_node(node)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/fx/experimental/symbolic_shapes.py", line 7084, in run_node
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] result = super().run_node(n)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/fx/interpreter.py", line 236, in run_node
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return getattr(self, n.op)(n.target, args, kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/fx/interpreter.py", line 316, in call_function
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return target(*args, **kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return super().__call__(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return wrapper()
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return self.dispatch(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 327, in dispatch
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return kernel(*args, **kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 744, in flex_attention_autograd
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] out, logsumexp = FlexAttentionAutogradOp.apply(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/autograd/function.py", line 575, in apply
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return super().apply(*args, **kwargs) # type: ignore[misc]
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 610, in forward
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] out, logsumexp = flex_attention(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return super().__call__(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return wrapper()
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 462, in wrapper
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return torch.overrides.handle_torch_function(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/overrides.py", line 1721, in handle_torch_function
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] result = mode.__torch_function__(public_api, types, args, kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return func(*args, **(kwargs or {}))
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return super().__call__(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 471, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return wrapper()
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 467, in wrapper
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return self.dispatch(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 363, in dispatch
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] result = handler(mode, *args, **kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 179, in functionalize_dispatch_mode_fn
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return fn(PythonFunctionalizeAPI(mode), *args, **kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_higher_order_ops/flex_attention.py", line 415, in flex_attention_functionalize
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] [query_unwrapped.new_zeros(())]
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_compile.py", line 51, in inner
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return disable_fn(*args, **kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_dynamo/eval_frame.py", line 764, in _fn
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return fn(*args, **kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/distributed/tensor/_api.py", line 348, in __torch_dispatch__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return DTensor._op_dispatcher.dispatch(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/distributed/tensor/_dispatch.py", line 221, in dispatch
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] local_results = op_call(*local_tensor_args, **op_info.local_kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_ops.py", line 756, in __call__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] return self._op(*args, **kwargs)
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] File "/data/users/xilunwu/oss/pytorch/torch/_subclasses/functional_tensor.py", line 201, in __torch_dispatch__
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] raise RuntimeError(
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
E0220 00:44:53.839000 1006342 torch/testing/_internal/common_distributed.py:733] RuntimeError: Attempting to use FunctionalTensor on its own. Instead, please use it with a corresponding FunctionalTensorMode()
```
| true
|
2,865,654,394
|
[DTensor] add aten.as_strided.default op
|
XilunWu
|
open
|
[
"oncall: distributed",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147603
* #147517
* #147516
* #147515
* __->__ #147514
* #145353
### Summary
This PR solves the FakeTensor propagation error encountered in #145353, though it turns out not to be needed for CP flex_attention: #147517 shows a way to avoid flex_attention's shape propagation over DTensor.
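For background, the semantics of the op being added, shown on a plain tensor (illustration only; the PR's contribution is the DTensor propagation rule):
```python
import torch

x = torch.arange(12.)
# as_strided reinterprets the underlying storage with an explicit size and
# stride (plus an optional storage offset); here the 12-element buffer is
# viewed as a 3x4 matrix.
y = torch.as_strided(x, (3, 4), (4, 1))
print(y)
```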
| true
|
2,865,576,831
|
[RFC] Request for Feedback and Review on PRs Adding RISC-V and RVV Support
|
zhangfeiv0
|
open
|
[
"module: build",
"module: cpu",
"triaged",
"enhancement",
"module: risc-v"
] | 5
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
# Motivation
I am writing to provide an update and request feedback regarding the progress of two PRs that I have submitted, which aim to enhance PyTorch's support for RISC-V architecture and the RVV (RISC-V Vector Extension).
# Current Progress
As mentioned in #129553, we have completed the key areas of focus, as follows:
- Kernel Optimization
Enhance DepthwiseConvKernel to take advantage of RVV.
Please refer to #127867 for details.
- Vec Library support for RISC-V
Currently, PyTorch covers different SIMD extensions on various platforms, such as AVX2, NEON, SVE, and so on. Similarly, we have integrated support for RVV within the vec sublibrary. For compatibility considerations, we implemented vector operations in the vec sublibrary based on __riscv_v_min_vlen (128 bits) and a register combination approach (m2).
Please refer to #135570 for details.
- CI support for compiling and building for the RISC-V architecture.
Due to the lack of cloud server and GitHub Runner support, we utilized an x86_64 architecture runner to cross-compile PyTorch for the RISC-V architecture, thereby accomplishing the compilation and build verification of the code.
Please refer to #143979 for details.
Please refer to #141550 for further discussion.
In order to enable cross-compilation of PyTorch, support for cross-compiling the third-party library SLEEF has been merged upstream. Please refer to [add cross-compile for sleef](https://github.com/shibatch/sleef/pull/607) for details.
# Additional context
Our testing environment includes: qemu, docker, K230 board, and Banana Pi board.
Given that these PRs have been open for some time, I would kindly request your attention and any insights or comments you may have to help move these changes forward. If there are any aspects of the implementation that need clarification or modification, I am happy to discuss and make the necessary adjustments.
Thank you for your time and consideration!
cc @malfet @seemethere @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,865,534,762
|
Enable ASAN in CUDA tests
|
cyyever
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/slow"
] | 3
|
COLLABORATOR
|
It should work.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,865,504,347
|
Enable UBSAN test
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,865,428,403
|
`clamp_` and `clamp` behave differently on MPS device.
|
ornew
|
open
|
[
"triaged",
"module: correctness (silent)",
"module: mps"
] | 4
|
NONE
|
### 🐛 Describe the bug
On the MPS device, `clamp_` and `clamp` operations on tensors produce inconsistent results, unlike on the CPU device where they behave as expected. Specifically, `clamp_` appears to not correctly modify the tensor in-place on MPS, leading to unexpected values in the output tensor. This issue has been observed to affect bounding box transformations in torchvision v2.
**Discovery Context and Minimal Reproduction Refinement:**
This bug was discovered while investigating unexpected outputs from affine transformations of bounding boxes using torchvision transforms v2. During the investigation, it was found that the `clamp_bounding_boxes` function in torchvision, which is used during coordinate transformations, utilizes `clamp_`. This led to the suspicion that the discrepancy between `clamp_` and `clamp` on MPS might be the root cause of the issue with bounding box transformations. This issue also echoes a similar problem previously encountered in YOLO, related to coordinate clamping (see https://github.com/ultralytics/ultralytics/issues/5817 ).
The relevant code in torchvision that uses `clamp_` within `clamp_bounding_boxes` can be found here: [torchvision/transforms/v2/functional/_meta.py#L249-L250](https://github.com/pytorch/vision/blob/b5c7443ec28292627351dde53dcd2613fedf1cdb/torchvision/transforms/v2/functional/_meta.py#L249-L250).
To reproduce the core bug with `clamp_` and `clamp`, run the following code:
```python
import torch
print(torch.__version__)
# --- Reproduction with unsliced arange ---
print("--- Unsliced arange ---")
torch.set_default_device("cpu")
cpu_unsliced_clamp_in_place = torch.arange(10).clamp_(0, 1)
cpu_unsliced_clamp_out_place = torch.arange(10).clamp(0, 1)
print(f"CPU clamp_ result (unsliced): {cpu_unsliced_clamp_in_place}")
print(f"CPU clamp result (unsliced): {cpu_unsliced_clamp_out_place}")
torch.set_default_device("mps")
mps_unsliced_clamp_in_place = torch.arange(10).clamp_(0, 1)
mps_unsliced_clamp_out_place = torch.arange(10).clamp(0, 1)
print(f"MPS clamp_ result (unsliced): {mps_unsliced_clamp_in_place}")
print(f"MPS clamp result (unsliced): {mps_unsliced_clamp_out_place}")
# --- Reproduction with sliced arange ---
print("\n--- Sliced arange ---")
torch.set_default_device("cpu")
cpu_sliced_clamp_in_place, cpu_sliced_clamp_out_place = torch.arange(10)[::2].clamp_(0, 1), torch.arange(10)[::2].clamp(0, 1)
print(f"CPU clamp_ result: {cpu_sliced_clamp_in_place}")
print(f"CPU clamp result: {cpu_sliced_clamp_out_place}")
torch.set_default_device("mps")
mps_sliced_clamp_in_place, mps_sliced_clamp_out_place = torch.arange(10)[::2].clamp_(0, 1), torch.arange(10)[::2].clamp(0, 1)
print(f"MPS clamp_ result: {mps_sliced_clamp_in_place}")
print(f"MPS clamp result: {mps_sliced_clamp_out_place}")
```
**Observed results:**
```
2.6.0
--- Unsliced arange ---
CPU clamp_ result (unsliced): tensor([0, 1, 1, 1, 1, 1, 1, 1, 1, 1])
CPU clamp result (unsliced): tensor([0, 1, 1, 1, 1, 1, 1, 1, 1, 1])
MPS clamp_ result (unsliced): tensor([0, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='mps:0')
MPS clamp result (unsliced): tensor([0, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='mps:0')
--- Sliced arange ---
CPU clamp_ result: tensor([0, 1, 1, 1, 1])
CPU clamp result: tensor([0, 1, 1, 1, 1])
MPS clamp_ result: tensor([0, 1, 1, 6, 8], device='mps:0')
MPS clamp result: tensor([0, 1, 1, 1, 1], device='mps:0')
```
As you can see from the "Unsliced arange" results, when `clamp_` and `clamp` are applied to an **unsliced** `arange` tensor, both operations produce **correct and consistent results across both CPU and MPS devices**: the values are correctly clamped to the range [0, 1], resulting in `tensor([0, 1, 1, 1, 1, 1, 1, 1, 1, 1])`.
However, the "Sliced arange" results highlight the bug: when applied to a **sliced tensor**, `clamp_` produces **incorrect results *specifically on the MPS device***: `tensor([0, 1, 1, 6, 8], device='mps:0')`. In contrast, `clamp` correctly clamps the sliced tensor on MPS, producing the expected `tensor([0, 1, 1, 1, 1], device='mps:0')`, and both `clamp_` and `clamp` behave correctly for sliced tensors on the CPU.
This inconsistency demonstrates that `clamp_` has a bug on MPS **when operating on sliced tensors**, while `clamp` and `clamp_` on CPU, and `clamp` on MPS, all function as expected.
**Expected results:**
Both `clamp_` and `clamp` should produce the same output on both CPU and MPS devices, correctly clamping the tensor values to the range `[0, 1]`, regardless of whether the tensor is sliced or not. Specifically, `clamp_` should modify the sliced tensor in-place to `tensor([0, 1, 1, 1, 1])` on MPS and clamp the unsliced tensor correctly as well, just like it does on CPU and like `clamp` does on MPS.
### Versions
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.7 (main, Oct 16 2024, 07:12:08) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] onnxruntime==1.20.1
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[conda] numpy 1.26.4 py312h7f4fdc5_0
[conda] numpy-base 1.26.4 py312he047099_0
[conda] numpydoc 1.7.0 py312hca03da5_0
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,865,425,111
|
Open pynvml related test in test_cuda.py
|
cdzhan
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
As stated in the title.
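For context (my own illustration; the PR body does not list the specific tests), these are the kinds of pynvml-backed helpers that such tests in test_cuda.py typically exercise:
```python
import torch

if torch.cuda.is_available():
    # Both helpers query the driver through pynvml (or amdsmi on ROCm).
    print(torch.cuda.utilization())
    print(torch.cuda.list_gpu_processes())
```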
| true
|
2,865,412,743
|
Precision drop after exporting PyTorch.sigmoid to ONNX.sigmoid
|
blingbling22
|
closed
|
[
"module: onnx",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
I have noticed a significant precision drop when exporting a model that uses `torch.sigmoid` to the ONNX format. The model performs well in PyTorch, but after converting it to ONNX and running inference, the outputs are no longer the same.
Result of sigmoid([[1.0, -17.0, -20.0]]):
onnx output: [[7.310586e-01 8.940697e-08 0.000000e+00]]
pytorch output: tensor([[7.3106e-01, 4.1399e-08, 2.0612e-09]])
The outputs should be small, as x < -16, but not flushed to zero.
Code:
```python
import torch
import onnx
import onnxruntime as ort
import numpy as np

class SigmoidModel(torch.nn.Module):
    def __init__(self):
        super(SigmoidModel, self).__init__()

    def forward(self, x):
        return torch.sigmoid(x)

model = SigmoidModel()
model.eval()

x = torch.tensor([[1.0, -17.0, -20.0]])
output_pytorch = model(x)

onnx_path = "sigmoid_model.onnx"
torch.onnx.export(model, x, onnx_path, input_names=["input"], output_names=["output"], opset_version=11)
print(f"export onnx to: {onnx_path}")

onnx_model = onnx.load(onnx_path)
input = x.numpy()
output = ort.InferenceSession(onnx_model.SerializeToString()).run(None, {"input": input})
print("onnx output:", output[0])
print("pytorch output:", output_pytorch)
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 12.0.1 (llvm-project 44128cf700e27708c116f6fc8c1b4caa5a60ae2c)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13500H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
BogoMIPS: 6374.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 10 MiB (8 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
| true
|
2,865,324,971
|
Document poison fork note for accelerator APIs
|
guangyey
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (rpc)",
"topic: not user facing",
"ciflow/xpu",
"module: accelerator"
] | 14
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149924
* __->__ #147507
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @albanD @EikanWang
| true
|
2,865,318,220
|
Unintuitive behavior and errors in F.pad
|
FabianIsensee
|
open
|
[
"module: nn",
"triaged",
"module: padding"
] | 0
|
NONE
|
### 🐛 Describe the bug
When padding tensors with F.pad, I consistently run into errors that seem unintuitive: F.pad seems to expect batch and channel dimensions even though this is not documented:
https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html
```python
import torch
import torch.nn.functional as F
F.pad(torch.zeros((32, 32)), [1, 2, 3, 4], mode='reflect')
```
> NotImplementedError: Only 2D, 3D, 4D, 5D padding with non-constant padding are supported for now
The input clearly is a 2D tensor and requires 2D padding. I would expect F.pad to handle this internally instead of confronting the user with this.
A workaround is to do
`F.pad(torch.zeros((32, 32))[None, None], [1, 2, 3, 4], mode='reflect')[0, 0]`
but that quickly becomes annoying when dealing with different input dimensions, color channels sometimes being present, sometimes not.
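A sketch of the kind of wrapper one ends up writing as a result (my own illustration; `pad_reflect` is a hypothetical helper name):
```python
import torch
import torch.nn.functional as F

def pad_reflect(x: torch.Tensor, pad: list[int]) -> torch.Tensor:
    # Reflect padding expects batch and channel dims, so temporarily add two
    # leading singleton dims and strip them again afterwards.
    return F.pad(x[None, None], pad, mode="reflect")[0, 0]

print(pad_reflect(torch.zeros(32, 32), [1, 2, 3, 4]).shape)  # torch.Size([39, 35])
```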
Thanks a lot for looking into this!
Best,
Fabian
PS I am aware of [this](https://github.com/pytorch/pytorch/issues/74310) and [this](https://github.com/pytorch/pytorch/issues/72521) but there seems to be no solution so far
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5800X3D 8-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 4548,8281
CPU min MHz: 2200,0000
BogoMIPS: 6799.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 96 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] fft-conv-pytorch==1.2.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.8.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] fft-conv-pytorch 1.2.0 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] numpydoc 1.8.0 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,865,116,576
|
[export] Enforce ordering for graph inputs of ExportedProgram
|
yiming0416
|
closed
|
[
"fb-exported",
"ciflow/inductor",
"release notes: export"
] | 14
|
CONTRIBUTOR
|
Summary:
As title. The graph inputs of an ExportedProgram have the following order:
token -> parameter -> buffer (persistent) -> buffer (non-persistent) -> tensor_constant -> custom_obj -> user_inputs
Verifier is also updated to check this order.
Test Plan: buck2 run @mode/dev-nosan caffe2/test:test_export -- -r test_enforcing_placeholder_order
Differential Revision: D69858068
| true
|
2,865,076,554
|
[Inductor][ROCm][CK] Unhardedcoded kernel shapes for ck_conv_template codegen
|
AviralGoelAMD
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 7
|
CONTRIBUTOR
|
## [Inductor][ROCm][CK] Parameterize `ck_conv_template` Codegen
### Description
Previously, ROCm CK kernel codegen templates were hardcoded with fixed values for convolution parameters:
- `index_t GroupCount`
- `index_t NBatch`
- `index_t NOutChannels`
- `index_t NInChannels`
- `vector<index_t> FilterSize`
- `vector<index_t> InputSize`
- `vector<index_t> ConvolutionStrides`
- `vector<index_t> Dilations`
- `vector<index_t> LeftPads`
- `vector<index_t> RightPads`
This PR updates `ck_conv_template` to accept these parameters dynamically from Inductor. By doing so, we reduce the number of generated templates, improving flexibility and maintainability.
### Testing
- Verified correctness by running relevant test cases, i.e. `test/inductor/test_ck_backend.py`
- Ensured generated kernels reflect the updated parameterization, i.e. generated templates in `/tmp/torchinductor_root/`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,865,051,770
|
[Inductor][ROCm][CK] Parameterize ck_conv_template Codegen
|
AviralGoelAMD
|
closed
|
[
"module: rocm",
"topic: not user facing",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
### Description
Previously, ROCm CK kernel codegen templates were hardcoded with fixed values for convolution parameters:
- index_t GroupCount
- index_t NBatch
- index_t NOutChannels
- index_t NInChannels
- vector<index_t> FilterSize
- vector<index_t> InputSize
- vector<index_t> ConvolutionStrides
- vector<index_t> Dilations
- vector<index_t> LeftPads
- vector<index_t> RightPads
This PR updates `ck_conv_template` to accept these parameters dynamically from Inductor. By doing so, we reduce the number of generated templates, improving flexibility and maintainability.
### Testing
Verified correctness by running relevant test cases, i.e. `test/inductor/test_ck_backend.py`
Ensured generated kernels reflect the updated parameterization, i.e. generated templates in `/tmp/torchinductor_root/`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,865,003,743
|
Slow first time torch.distributed op with ray
|
grimoire
|
open
|
[
"oncall: distributed",
"triaged",
"module: c10d"
] | 0
|
NONE
|
### 🐛 Describe the bug
I am working on deploying a distributed model with [ray](https://github.com/ray-project/ray). Everything works fine until I change `CUDA_VISIBLE_DEVICES`: the first `all_reduce` then takes much longer (30s+).
### Reproduction script
`ray_dist.py`
```python
import os
import time
import ray
import torch
import torch.distributed as dist
from contextlib import contextmanager


@ray.remote(num_gpus=1)
class DistActor:

    def __init__(self, rank, world_size):
        os.environ['MASTER_ADDR'] = '127.0.0.1'
        os.environ['MASTER_PORT'] = '29500'
        dist.init_process_group(backend='nccl', rank=rank, world_size=world_size)

    def all_reduce(self):
        a = torch.rand([1], device='cuda')
        dist.all_reduce(a)


@contextmanager
def timeit(msg):
    print(msg)
    start = time.time()
    yield
    end = time.time()
    duration = (end - start)
    print(f'Take time: {duration:.3f} s')


if __name__ == '__main__':
    ray.init()
    world_size = 2
    actors = [DistActor.remote(rank, world_size) for rank in range(world_size)]

    with timeit('start first all_reduce'):
        ray.get([actor.all_reduce.remote() for actor in actors])

    with timeit('start second all_reduce'):
        ray.get([actor.all_reduce.remote() for rank, actor in enumerate(actors)])
```
good without `CUDA_VISIBLE_DEVICES` or with `CUDA_VISIBLE_DEVICES=0,1` or with `CUDA_VISIBLE_DEVICES=1,0`
```bash
python ray/ray_dist.py
# start first all_reduce
# Take time: 4.486 s
# start second all_reduce
# Take time: 0.002 s
```
bad with `CUDA_VISIBLE_DEVICES=6,7`
```bash
CUDA_VISIBLE_DEVICES=6,7 python ray/ray_dist.py
# start first all_reduce
# Take time: 63.014 s
# start second all_reduce
# Take time: 0.002 s
```
good with `docker run -it --gpus '"device=6,7"' ..`
```bash
python ray/ray_dist.py
# start first all_reduce
# Take time: 3.183 s
# start second all_reduce
# Take time: 0.001 s
```
A multiprocessing implementation does not show the same behaviour. I am not sure whether this problem is caused by ray or by pytorch, so I have posted the issue on both.
Do you have any idea?
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.17
Python version: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 2250.000
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4500.23
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
NUMA node4 CPU(s): 64-79
NUMA node5 CPU(s): 80-95
NUMA node6 CPU(s): 96-111
NUMA node7 CPU(s): 112-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu11==8.7.0.84
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu11==2.19.3
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.99
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] nvtx==0.2.10
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchdata==0.8.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.18.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[pip3] tritonclient==2.33.0
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] nccl 2.19.3.1 h6103f9b_1 conda-forge
[conda] numpy 1.24.3 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu11 8.7.0.84 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.19.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.99 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] pytorch-triton 2.1.0+e6216047b8 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchdata 0.8.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchtext 0.18.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
[conda] tritonclient 2.33.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,864,996,744
|
removed zero dim cpu logic from fake_tensor.py
|
zero000064
|
open
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"open source",
"module: amp (automated mixed precision)",
"NNC",
"release notes: quantization",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)",
"module: compiled autograd"
] | 29
|
CONTRIBUTOR
|
Fixes #144748
In #144748, the inconsistency between the eager mode and the inductor mode is reported as a bug.
The root cause is that fake_tensor.py's find-common-device method, https://github.com/pytorch/pytorch/blob/0b0da81021e061c021e515bc35d7dc0dbbb05941/torch/_subclasses/fake_tensor.py#L833, takes zero-dim CPU tensors into account, but the device check in adaption.h doesn't.
In eager mode, the device check is only performed for a subset of operations, https://github.com/pytorch/pytorch/blob/f66229de2b4ec32ad26ecfbb9905457af5fd0541/torchgen/dest/register_dispatch_key.py#L281, and no zero-dim CPU logic is used, https://github.com/pytorch/pytorch/blob/f66229de2b4ec32ad26ecfbb9905457af5fd0541/aten/src/ATen/core/op_registration/adaption.h#L47.
Here are the generated wrapper_CUDA_add_Tensor and wrapper_CUDA_nextafter; no device check is found in the add method.
```
at::Tensor wrapper_CUDA_add_Tensor(const at::Tensor & self, const at::Tensor & other, const at::Scalar & alpha) {
  // No device check
  structured_ufunc_add_CUDA_functional op;
  op.meta(self, other, alpha);
  op.impl(self, other, alpha, op.outputs_[0]);
  return std::move(op.outputs_[0]);
}

at::Tensor wrapper_CUDA_nextafter(const at::Tensor & self, const at::Tensor & other) {
  std::optional<Device> common_device = std::nullopt;
  (void)common_device; // Suppress unused variable warning
  c10::impl::check_and_update_common_device(common_device, self, "wrapper_CUDA_nextafter", "self");
  c10::impl::check_and_update_common_device(common_device, other, "wrapper_CUDA_nextafter", "other");
  structured_nextafter_out_functional op;
  op.meta(self, other);
  op.impl(self, other, op.outputs_[0]);
  return std::move(op.outputs_[0]);
}
```
So this PR removes the zero-dim CPU logic from fake_tensor.py. A minimal illustration of the mismatch is below.
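The following shows the kind of inconsistency the generated wrappers above imply (a sketch; exact error text may differ):
```python
import torch

a = torch.randn(4, device="cuda")
b = torch.tensor(2.0)  # zero-dim CPU tensor

# add has no device check in its wrapper, so the zero-dim CPU tensor is accepted
print((a + b).device)  # cuda:0

# nextafter goes through check_and_update_common_device, so the same mix is
# expected to raise a device-mismatch error, per the wrapper shown above
torch.nextafter(a, b)
```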
cc @eellison @zou3519
| true
|
2,864,995,945
|
Support serialization for uintx/intx in weights_only
|
jerryzh168
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147500
Summary:
Fixing the issue reported by huggingface
Test Plan:
python test/test_serialization.py -k test_serialization_uintx_intx
Reviewers:
Subscribers:
Tasks:
Tags:
| true
|
2,864,984,457
|
Optimize `dynamo` typing
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 7
|
CONTRIBUTOR
|
Optimize type annotations for `dynamo` methods.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,864,948,051
|
Upgrade submodule oneDNN to v3.7
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"Merged",
"Reverted",
"Stale",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ci-no-td",
"ciflow/linux-aarch64"
] | 23
|
COLLABORATOR
|
This PR is to upgrade submodule oneDNN to v3.7.
## Improvements
- Improved performance of convolution and matmul primitives on Intel Xeon processors with Intel AMX instruction set support (formerly Sapphire Rapids and Granite Rapids).
- Improved performance of int8 and fp32 forward convolution primitive on processors with Intel AVX2 instruction set support.
- Improved performance of fp8 matmul primitives with bf16 and fp16 bias data type on Intel Xeon processors with Intel AMX instruction set support (formerly Sapphire Rapids and Granite Rapids).
- Introduced initial optimizations for Intel GPUs based on Xe3 architecture.
- Added bfloat16 support for SDPA, implemented fp16 and bf16 gemm kernel in SDPA.
- Fixed several issues: f16 matmul accuracy, SDPA not being dispatched to the ukernel, bf16/fp16/fp32 conv performance, an INT8 kernel page fault, a deconvolution precision issue on complex128 and fp64, and a gemm correctness issue in float16.
- Improved bf16 matmul performance with fp32 destination with Arm Compute Library (ACL).
- Improved bf16 to fp32 reorder performance.
- Improved bf16 reorder performance.
- Improved bf16 convolution with ACL.
Fixes https://github.com/pytorch/pytorch/issues/136348.
## Validation results on CPU
1. NLP models accuracy/inference/training


2. Torchbench cpu userbenchmark inference & training

3. Inductor quantization

4. Dynamo benchmarks








## Validation results on XPU
Accuracy is same as baseline. Performance is shown below.

## Validation results on ARM


cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @aditew01 @nikhil-arm @fadara01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @yf225
| true
|
2,864,936,952
|
[PrivateUse1] Improve error message after we deprecated the `REGISTER_GENERATOR_PRIVATEUSE1` macro
|
shink
|
closed
|
[
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc: @albanD @FFFrog
| true
|
2,864,849,943
|
[codemod] Fix unused-value issue in caffe2/aten/src/ATen/native/miopen/Conv_miopen.cpp +1
|
r-barnes
|
closed
|
[
"module: cpp",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: improvements",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary:
LLVM has a warning `-Wunused-value` which we treat as an error because it's so often diagnostic of a code issue. Unused values often indicate a programming mistake, but can also just be unnecessary cruft that harms readability and performance.
For questions/comments, contact r-barnes.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
Differential Revision: D69755123
cc @jbschlosser
| true
|
2,864,814,515
|
Native Node Sleep/Wake Functionality for Neural Network Layers
|
MikeyBeez
|
open
|
[
"module: nn",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
# Feature Request: Native Node Sleep/Wake Functionality for Neural Network Layers
## Overview
This feature request proposes adding native functionality to temporarily deactivate ("sleep") and reactivate ("wake") specific nodes and weights within neural network layers. This capability would enable important research directions and training optimizations that are currently difficult to implement cleanly.
## Motivation
Current approaches to modifying network architecture during training (such as pruning or freezing) are relatively crude - we either permanently remove connections or freeze entire layers. A more granular and reversible approach would enable new research directions and potentially more efficient training methods.
## Key Benefits
### 1. Preservation of Potentially Important Pathways
- Instead of permanent pruning of low-weight connections, nodes could be temporarily deactivated
- Allows for reactivation when exploring new levels of abstraction or capabilities
- Particularly important for continuous learning systems where initially "unimportant" connections might become crucial later
- Enables empirical testing of theories about the role of weak connections in network development
### 2. Training with Selective Weight Freezing
- Freeze specific pathways while training others
- Allow new capacity to develop without disrupting existing knowledge
- Test approaches to preventing catastrophic forgetting
- Study how networks develop when different components are frozen/active at different times
- Enable more sophisticated approaches to transfer learning
### 3. Dynamic Architecture Optimization
- More flexible than current pruning approaches
- Enables experimentation with dynamic network growth and pruning
- Allows for temporary deactivation of pathways to study network behavior
- Support for adaptive architecture during training
### 4. Research Applications
- Study emergence of hierarchical representations
- Investigate network redundancy and pathway importance
- Examine how different parts of networks contribute to various abstraction levels
- Test hypotheses about neural network development inspired by biological systems
- Explore new approaches to architecture search
### 5. Training Optimization
- Selective activation/deactivation during different training phases
- Resource optimization without permanent architecture changes
- More granular control over network capacity
- Potential for more efficient training regimes
## Current Limitations
The current approaches (using masks or requires_grad flags) are hacky and don't provide clean, efficient implementation of this functionality. These workarounds:
- Are often computationally inefficient
- Don't cleanly integrate with optimizers
- Can be error-prone
- Make experiments harder to implement and reproduce
- Don't properly handle all edge cases
## Proposed API
```python
import torch.nn as nn

# Layer-level functionality
class SleepableLayer(nn.Module):
    def sleep_nodes(self, indices):
        """Deactivate specific nodes"""
        pass

    def wake_nodes(self, indices):
        """Reactivate specific nodes"""
        pass

    def is_sleeping(self, indices):
        """Check sleep status of nodes"""
        pass

    def sleep_weights(self, indices):
        """Deactivate specific weights"""
        pass

    def wake_weights(self, indices):
        """Reactivate specific weights"""
        pass

    def get_sleep_state(self):
        """Return current sleep/wake configuration"""
        pass

# Model-level convenience functions
model.sleep_nodes(layer_name, indices)
model.wake_nodes(layer_name, indices)
model.get_sleep_configuration()
```
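For contrast, a minimal sketch of the mask-based workaround mentioned under "Current Limitations" (illustrative only, for a single linear layer; not a proposed implementation):
```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Linear):
    """Approximates 'sleeping' output nodes by zeroing them with a buffer mask."""

    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        self.register_buffer("awake", torch.ones(out_features))

    def sleep_nodes(self, indices):
        self.awake[indices] = 0.0

    def wake_nodes(self, indices):
        self.awake[indices] = 1.0

    def forward(self, x):
        # Sleeping nodes output zeros; their weight gradients are also zeroed by the mask.
        return super().forward(x) * self.awake
```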
## Implementation Considerations
### Core Requirements
1. Efficient switching between active/inactive states
2. Proper gradient handling during backpropagation
3. Seamless integration with existing optimizers
4. Support for both node-level and weight-level control
5. Options for different levels of deactivation (full sleep vs weight freezing)
6. State preservation during save/load operations
7. Proper handling of batch normalization and other layer types
8. Clear documentation of behavior with different optimizer types
### Performance Considerations
- Minimal memory overhead for sleep state tracking
- Efficient computation path for inactive nodes
- Batch operation support for sleep/wake operations
- Proper GPU memory management
### Safety Features
- Validation of sleep/wake operations
- Warning for potentially problematic configurations
- State consistency checks
- Clear error messages for invalid operations
## Benefits to the PyTorch Community
This functionality would:
1. Enable new research directions in network architecture and training
2. Make experiments more reproducible through standardized implementation
3. Reduce code complexity for many common training scenarios
4. Support innovation in network architecture research
5. Provide tools for studying network behavior and development
## Submission Information
### Primary Channels
1. GitHub Issue:
Create new issue at https://github.com/pytorch/pytorch/issues
Use label: "feature request"
2. PyTorch Discussion Forums:
Post in "Feature Requests & Ideas" category at https://discuss.pytorch.org/
### Additional Contacts
- PyTorch Developer Relations: dev-support@pytorch.org
- PyTorch Core Team (through GitHub)
## Additional Resources
- Relevant research papers on network pruning and architecture
- Examples of current workarounds and their limitations
- Use cases from the research community
- Related issues and discussions in the PyTorch repository
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,864,777,665
|
[dynamo] more better error messages [3/N]
|
williamwen42
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compile ux"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147912
* #147872
* __->__ #147494
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,864,742,886
|
[CUDA] Replace deprecated usages of cub iterators and thread operators
|
xwang233
|
open
|
[
"module: cuda",
"triaged",
"open source",
"topic: not user facing"
] | 8
|
COLLABORATOR
|
Several cub iterators have been deprecated and removed in the latest CCCL (cub) development https://github.com/NVIDIA/cccl/pull/3831. This PR replaced the usage of those cub iterators with thrust iterators.
Some cub thread operators were also deprecated and removed in https://github.com/NVIDIA/cccl/pull/3918. This PR replaced those operators with libcudacxx ops.
This might also affect ROCM usability a bit.
This patch is tested to work with CCCL commit at https://github.com/NVIDIA/cccl/commit/82befb089420cc5e0f5bb08a083c0b14c8984af6
Tracking of CCCL/CUB deprecations in the most recent development https://github.com/NVIDIA/cccl/issues/101
internal ref: 5118165
cc @ptrblck @msaroufim @eqy
| true
|
2,864,742,437
|
Fix RuntimeError: value cannot be converted to type int64_t without overflow
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147492
The exact call is coming from here:
https://github.com/pytorch/pytorch/blob/78a94c911435bf9b1bb45888033a29081e406ec2/torch/_inductor/memory.py#L161
I have no idea why this error is being thrown and what mode/modes might be failing for this
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,864,739,495
|
Fix ptxas warnings on sm_120
|
sclarkson
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Per the feature specification table in the CUDA C programming guide, sm_120 also has a maximum of 1536 threads per SM. This silences many ptxas warnings during the build.
See Table 23 here: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#features-and-technical-specifications-technical-specifications-per-compute-capability
| true
|
2,864,737,450
|
Documentation: fix RNN example for multiple layers
|
jibril-b-coulibaly
|
open
|
[
"triaged",
"open source",
"Stale"
] | 4
|
NONE
|
Hi,
The documentation on RNN provides the following snippet of code that I think is incorrect.
```
# Efficient implementation equivalent to the following with bidirectional=False
def forward(x, hx=None):
    if batch_first:
        x = x.transpose(0, 1)
    seq_len, batch_size, _ = x.size()
    if hx is None:
        hx = torch.zeros(num_layers, batch_size, hidden_size)
    h_t_minus_1 = hx
    h_t = hx
    output = []
    for t in range(seq_len):
        for layer in range(num_layers):
            h_t[layer] = torch.tanh(
                x[t] @ weight_ih[layer].T
                + bias_ih[layer]
                + h_t_minus_1[layer] @ weight_hh[layer].T
                + bias_hh[layer]
            )
        output.append(h_t[-1])
        h_t_minus_1 = h_t
    output = torch.stack(output)
    if batch_first:
        output = output.transpose(0, 1)
    return output, h_t
```
With `num_layers > 1` we get a stacked RNN, where the input to layer `L > 1` is the hidden state `h_{L-1}` produced by layer `L-1`.
The doc supports that by mentioning the size of the weight matrix must be:
```
weight_ih_l[k]: the learnable input-hidden weights of the k-th layer,
of shape `(hidden_size, input_size)` for `k = 0`. Otherwise, the shape is
`(hidden_size, num_directions * hidden_size)`
```
However, in the snippet above, the line
https://github.com/pytorch/pytorch/blob/fb55bac3de7f0afb45969ad8adbc340de48747ac/torch/nn/modules/rnn.py#L499
multiplies the input `x[t]` with the weight matrix `weight_ih[layer]`, whose dimensions do not match for layers `k > 0`.
I don't think the snippet is functional; this PR fixes it. A corrected sketch of the inner loop is shown below.
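A possible correction, reusing the names from the snippet above (a sketch of the intended behavior, not necessarily the exact wording of the PR):
```python
for t in range(seq_len):
    for layer in range(num_layers):
        # layer 0 consumes the input; deeper layers consume the current state of the layer below
        layer_input = x[t] if layer == 0 else h_t[layer - 1]
        h_t[layer] = torch.tanh(
            layer_input @ weight_ih[layer].T
            + bias_ih[layer]
            + h_t_minus_1[layer] @ weight_hh[layer].T
            + bias_hh[layer]
        )
    output.append(h_t[-1])
    h_t_minus_1 = h_t
```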
| true
|
2,864,658,922
|
type `fully_shard` so that the return value can be chained with typing enabled
|
xunnanxu
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147489
* #147488
This allows for
```
fsdped = fully_shard(model)
fsdped.set_xyz()
```
same applies if `model` is actually a list of modules
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [D69888119](https://our.internmc.facebook.com/intern/diff/D69888119)
| true
|
2,864,658,844
|
capture the return value in the contract typing
|
xunnanxu
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (composable)"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147489
* __->__ #147488
----
* the existing typing makes the return type `Optional[nn.Module]`
* this doesn't seem to be what the decorator actually does as it does
not alter the original return type
* This PR aims to fix the typing (the general pattern is sketched below)
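The general shape of the fix, as a sketch (this is not the actual `contract` signature, just the signature-preserving decorator pattern):
```python
from typing import Callable, ParamSpec, TypeVar  # ParamSpec requires Python 3.10+

_P = ParamSpec("_P")
_R = TypeVar("_R")

def keeps_signature(func: Callable[_P, _R]) -> Callable[_P, _R]:
    # The decorator preserves the wrapped callable's parameters and return type,
    # instead of widening the return type to Optional[nn.Module].
    def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> _R:
        return func(*args, **kwargs)
    return wrapper
```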
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [D69888120](https://our.internmc.facebook.com/intern/diff/D69888120)
| true
|
2,864,603,987
|
[CI] Build sm89 with more procs experiment
|
clee2000
|
closed
|
[
"Merged",
"release notes: releng"
] | 7
|
CONTRIBUTOR
|
Add a build that uses 4 out of the 8 processes available on a linux.2xlarge/c5.2xlarge. Currently it's set to 2 because it would oom, but I'm curious as to how often people's builds oom. I can't test this on my own because of caching, so it has to run on pull request
This might result in a failing job on many people's PRs, and I'm not sure how to get around it. I named it stable so it automatically gets sorted into the stable group for Dr. CI, but it'll still show up.
| true
|
2,864,590,596
|
remote_cache.py loggings cause a logging error after pytest exit
|
henrylhtsang
|
open
|
[
"module: logging",
"triaged",
"module: testing",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Basically, it seems like pytest cannot handle logging after its capture streams are closed, but remote_cache emits its logging from a handler registered via `atexit.register`, which runs after that point.
repro (test expects A100, but you can comment that out):
```
TORCH_LOGS="inductor" pytest test/inductor/test_cutlass_backend.py -k test_get_max_alignment
```
error:
```
--- Logging error ---
Traceback (most recent call last):
File "/home/henrylhtsang/.conda/envs/pytorch-3.12/lib/python3.12/logging/__init__.py", line 1163, in emit
stream.write(msg + self.terminator)
ValueError: I/O operation on closed file.
Call stack:
File "/home/henrylhtsang/pytorch/torch/_inductor/remote_cache.py", line 417, in dump_cache_stats
log.info("Cache Metrics")
Message: 'Cache Metrics'
Arguments: ()
```
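The mechanism can be sketched without inductor at all (a minimal illustration, assuming it is run under pytest with log capture enabled, e.g. `pytest -o log_cli=true --log-cli-level=INFO`):
```python
# test_atexit_logging.py
import atexit
import logging

log = logging.getLogger(__name__)

@atexit.register
def dump_stats():
    # By the time atexit handlers run, pytest may have closed the streams its
    # logging handlers write to, producing "I/O operation on closed file".
    log.info("Cache Metrics")

def test_noop():
    assert True
```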
### Versions
trunk
cc @chauhang @penguinwu
| true
|
2,864,568,050
|
[cutlass backend] Add main tests for mm, addmm, bmm
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147485
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,864,551,068
|
[CI] Do not overwrite return code of test file when fails for rerun disabled tests
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Do not overwrite the return code of a single file when it fails. This will allow the log to be printed to stdout and the gha logs
| true
|
2,864,536,391
|
Can we have Dim.AUTO/Dim.DYNAMIC with an optional min & max?
|
ColinPeppler
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
**What?**
- `Dim.AUTO` is the dynamic shapes feature that automatically handles dynamic dim relations and can specialize.
- `Dim.DYNAMIC` is the same as `Dim.AUTO` but can't specialize IIUC.
- The `Dim` API allows you to specify symbol name, min and max.
- Can we add this min & max support to `Dim.AUTO/DYNAMIC`?
**Potential Benefits**
- As an Export user, I want the benefits of both (1) automatic handling of dynamic dim relations and (2) specifying min/max metadata to take advantage of Inductor optimizations
- If we use the default max (i.e. `int_oo`), I think it's possible we could be missing out on Inductor optimizations.
- That being said, I haven't seen many optimizations that rely on the upper bound yet. (A sketch of the proposed usage is below.)
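To make the request concrete (the `Dim.AUTO(min=..., max=...)` form is hypothetical; only the named `Dim` supports bounds today):
```python
import torch

# Works today: a named dim with explicit bounds
seq = torch.export.Dim("seq", min=2, max=1024)
# ep = torch.export.export(model, (x,), dynamic_shapes={"x": {0: seq}})

# What this issue asks for (hypothetical API, does not exist yet):
# dynamic_shapes = {"x": {0: torch.export.Dim.AUTO(min=2, max=1024)}}
```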
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,864,520,089
|
[experimental] delayed compile
|
bobrenjc93
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147482
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,864,519,390
|
[dynamo][guard] Guard on the cuda device index
|
anijain2305
|
open
|
[
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147481
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,864,490,079
|
[PT2]: allow empty dict to pass type check (#147167)
|
kqfu
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 7
|
CONTRIBUTOR
|
Summary:
Seeing errors like the following when testing sigmoid for inline_cvr and perevent_cvr models.
```
terminate called after throwing an instance of 'c10::Error'
what(): forward() Expected a value of type 'Dict[int, Tuple[Tensor, Tensor, Tensor]]' for argument 'event_based_features' but instead found type 'Dict[Any, Any]'.
```
Let empty dict pass type check.
Test Plan:
```
MODEL_ENTITY_ID=691508446
SNAPSHOT_ID=0
OTHER_MODEL_ENTITY_ID=649645886
OTHER_SNAPSHOT_ID=0
MODULE=local
buck2 run mode/opt caffe2/torch/fb/model_transform/fx2trt/packaging:load_net_predictor -- \
--loadMode=BenchmarkAB \
--inputNetFile=/data/users/${USER}/models/${MODEL_ENTITY_ID}/${SNAPSHOT_ID}/${MODEL_ENTITY_ID}_${SNAPSHOT_ID}${suffix} \
--otherNetFile=/data/users/${USER}/models/${OTHER_MODEL_ENTITY_ID}/${OTHER_SNAPSHOT_ID}/${OTHER_MODEL_ENTITY_ID}_${OTHER_SNAPSHOT_ID}${suffix} \
--moduleName=${module} \
--submodToDevice "" \
--benchmarkDontRebatchSamples=true \
--sampleInputFilePath=/data/users/${USER}/models/${MODEL_ENTITY_ID}/${SNAPSHOT_ID}/archive_.predictor.disagg.gpu.local/data/sample_inputs/local.pt
```
Reviewed By: yjhao
Differential Revision: D69871393
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,864,435,802
|
[RFC] Deprecate silent fallback to aten logic in Inductor
|
henrylhtsang
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 5
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
The proposal is suggested by @eellison. See more context in https://github.com/pytorch/pytorch/pull/147148
**tldr**
This proposal isn’t going to affect the majority of users. You only need to worry about it if **both** of the following are true:
You set your custom max_autotune_gemm_backends
You do not have Aten in max_autotune_gemm_backends
# Problem
Currently, when max_autotune_gemm is true, GEMM kernels are generated using backends specified in max_autotune_gemm_backends. If these backends fail to produce a valid kernel, Inductor silently falls back to ATen, even when ATen is not included in max_autotune_gemm_backends. This silent fallback behavior is what we want to remove.
Additionally, there is autotune_fallback_to_aten, which attempts to control this silent fallback behavior. However, the correct approach should be to respect the user's choice of backends specified in max_autotune_gemm_backends.
## Expected behavior
In the expected behavior, we respect the user's choice of max_autotune_gemm_backends. If a user intentionally excludes ATen from max_autotune_gemm_backends and the specified backends fail to generate a valid kernel, an error will be raised.
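For reference, the knobs involved look roughly like this (a sketch; what happens when the listed backends fail to produce a kernel is exactly what this RFC changes):
```python
import torch
import torch._inductor.config as inductor_config

inductor_config.max_autotune_gemm = True
# ATen intentionally left out of the backend list:
inductor_config.max_autotune_gemm_backends = "TRITON,CUTLASS"

@torch.compile
def f(a, b):
    return a @ b
```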
# Proposal
We want to deprecate it in 3 steps.
## Step 1: Gate the behavior with autotune_fallback_to_aten
- Set up autotune_fallback_to_aten to control the silent fallback behavior for the remaining ops (addmm, bmm, mixed mm, int mm, etc.)
- Remove excess fallback logic that is not gated by autotune_fallback_to_aten, for example, [link](https://github.com/pytorch/pytorch/pull/147148#discussion_r1956643273).
- Add an env variable to control autotune_fallback_to_aten as a kill switch
In this step, we don’t expect any change in behavior.
## Step 2: turn off autotune_fallback_to_aten
This should be a one line change to change the behavior.
## Step 3: cleanup
We would clean up the logic after three weeks to one month, assuming nothing breaks.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,864,383,871
|
Blackwell Only Bugs
|
drisspg
|
open
|
[
"module: cuda",
"module: tests",
"triaged",
"oncall: pt2",
"module: inductor",
"oncall: export",
"Blackwell"
] | 1
|
CONTRIBUTOR
|
# Blackwell specific failures
1. Out-of-range shared or local address on blackwell for second MM in flex-attention | full log: https://www.internalfb.com/intern/paste/P1749623099/ + repro: https://gist.github.com/drisspg/807e3e4bfbfbc76f85bde154e00850c6
3. https://github.com/pytorch/ao/issues/1799
cc @ptrblck @msaroufim @eqy @mruberry @ZainRizvi @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi
| true
|
2,864,345,798
|
[Cutlass] Add test verifying number of precompiles
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
As title
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,864,289,782
|
Remove mention of magma-cuda in readme.md, refactor magma_conda install
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Related to https://github.com/pytorch/pytorch/issues/138506: we migrated the magma-cuda build from Anaconda to AWS.
The last version of magma-cuda published was 12.6: https://anaconda.org/pytorch/magma-cuda126
Here is the PR that moved from Anaconda to a tarball: https://github.com/pytorch/pytorch/pull/140417
cc @malfet @afrittoli @seemethere @albanD
| true
|
2,864,284,422
|
[Export AOTI] dynamic_shapes export and compile degraded output
|
bhack
|
open
|
[
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor"
] | 22
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I cannot share the code as it is a whole model export and compile.
With exactly the same code, exporting with W and H at a fixed resolution gives the correct output.
When I instead use dynamic W and H in a given range
```python
h_dim = torch.export.Dim("H", min=min_h, max=max_h)
w_dim = torch.export.Dim("W", min=min_w, max=max_w)
constraints = {"x": {3: 14 * h_dim, 4: 14 * w_dim}}
```
and pass these constraints as `dynamic_shapes` to `export`, the output is degraded.
Same inference code and exactly the same export code; only this change.
The only way to get a correct output with `Dim` and `dynamic_shapes` is to use `export` example input `args`/`kwargs` with a W and H that match the inference resolution.
### Versions
nightly
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi
| true
|
2,864,268,296
|
[cutlass backend] enable mixed mm test (cutlass2x) for H100
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147474
I am okay with not landing this as well. The motivation is to make developing on H100 smoother.
The reason the current test works on A100 but not H100 is an alignment issue, which was caused by arch-specific filtering logic.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,864,258,678
|
[ROCm] Update inductor-periodic.yml to use the correct label
|
amdfaa
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/inductor-periodic"
] | 3
|
CONTRIBUTOR
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,864,241,709
|
[ONNX] Implement sym_not
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147472
* #147469
Fix https://github.com/pytorch/pytorch/issues/136572
| true
|
2,864,237,527
|
Add type hints to cuda kernel
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Missed this in a previous PR
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,864,195,740
|
[test] sccache log
|
clee2000
|
open
|
[
"ciflow/trunk",
"release notes: releng",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,864,185,350
|
[ONNX] Migrate onnx ops decomp functions
|
justinchuby
|
open
|
[
"module: onnx",
"open source",
"release notes: onnx",
"topic: new features"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147472
* __->__ #147469
This is the main PR that adds the onnx decomp functions from onnxscript to pytorch to decouple torch.onnx from implementations in onnxscript. Details of this migration, including how the logic is tested are described in https://github.com/pytorch/pytorch/issues/139301#issuecomment-2661017047.
## Guide for reviewers
The PR include three parts:
- Addition of the decomp logic. These are self contained onnxscript implementations under `_torchlib/ops`. They are individually tested by `test/onnx/torchlib/test_ops.py` using the added metadata in `test/onnx/torchlib/ops_test_data.py` and `test/onnx/torchlib/extra_opinfo.py`.
- Matching and replacement were done on the `onnxscript` torchlib code to replace `"aten::xxx"` keys with `aten.xxx` torch op overload objects for accurate registration in core.
- Removal of redundant bridging logic: `torch/onnx/_internal/exporter/_registration.py` and `torch/onnx/_internal/exporter/_ir_passes.py` had logic to handle the decomp when it lived out of core. This logic was removed.
- Removal of a test: `test/onnx/exporter/test_small_models_e2e.py` had a single test for torchvision, which we do not migrate into core. So we remove it until we have a better plan to support torchvision. This is deemed acceptable because torchvision support was only a demo in the new exporter previously, and there is a way for users to register torchvision functions using the `torch.onnx.export(..., dynamo=True, custom_translation_table=...)` API.
### Test runtime
Added tests finish within 40 seconds on a 10-core local machine.
### Previous work done
Necessary refactoring was done in #147396 and scaffolding of tests and logic was added in #147392.
### Migration source
Synced with ONNX Script at https://github.com/microsoft/onnxscript/commit/6f9533e480b618a4c606e678b7754a1bd9cad183
### Issue fixed
Fix https://github.com/pytorch/pytorch/issues/139301
Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
| true
|
2,864,159,828
|
[Triton] [Upstream] FlexDecoding Test Failures opToStageAndCluster[&op].first < numStages && "Op with invalid stage!
|
drisspg
|
closed
|
[
"triaged",
"oncall: pt2",
"upstream triton",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 6
|
CONTRIBUTOR
|
# Summary
Setup: check out and install Triton at commit f73cf3268ef04d862493e0fc1cca5257f2a09346
Check out recent main of PyTorch
Script to reproduce:
```py
"""
Standalone test file for max_autotune with captured buffers.
This file brings together the test along with all supporting functions,
decorators, and helper classes so that you can run it as a standalone file.
"""
import functools
from collections import namedtuple
from typing import Callable, Optional
import unittest
from unittest import skipUnless
from unittest.mock import patch
import torch
from torch.testing._internal.common_cuda import PLATFORM_SUPPORTS_BF16
# Imports from inductor and attention modules
from torch._inductor.test_case import TestCase as InductorTestCase
from torch.nn.attention.experimental._paged_attention import PagedAttention
from torch.nn.attention.flex_attention import (
BlockMask,
create_block_mask,
flex_attention,
noop_mask,
)
from torch.utils._triton import has_triton
# -------------------- Global definitions --------------------
Tolerances = namedtuple("Tolerances", ["atol", "rtol"])
torch.set_float32_matmul_precision("high")
# Aliases
index = torch.ops.aten.index
Tensor = torch.Tensor
# Create attention partial function
def create_attention(score_mod, block_mask, enable_gqa=False):
return functools.partial(
flex_attention,
score_mod=score_mod,
block_mask=block_mask,
enable_gqa=enable_gqa,
)
def create_block_mask_test(score_mod, query, key):
block_mask = create_block_mask(
score_mod, 1, 1, query.shape[-2], key.shape[-2], query.device
)
return block_mask
# Test dtypes and page sizes
test_dtypes = (
[torch.float16, torch.bfloat16, torch.float32]
if PLATFORM_SUPPORTS_BF16
else [torch.float16, torch.float32]
)
test_dtypes_fast = [torch.float16]
test_page_sizes = [64, 128, 256]
# --------- Useful score mod functions for testing ---------
# Dimensions for test tensors
B = 4
S = 2048
D = 64
(Hq, Hkv) = (16, 8)
test_Hq_Hkv = [
(16, 1),
(8, 2),
(16, 16),
]
test_Bq_Bkv = [
(3, 1),
(5, 1),
(8, 1),
(16, 1),
]
test_block_size = [
64,
128,
(1, 64),
(128, 64),
]
# Helper function to clone query, key, value tensors
def query_key_value_clones(
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
dtype: torch.dtype = None,
):
"""Clones the query, key, and value tensors and moves them to the specified dtype."""
if dtype is None:
dtype = query.dtype
query_ref = query.detach().clone().to(dtype).requires_grad_(query.requires_grad)
key_ref = key.detach().clone().to(dtype).requires_grad_(key.requires_grad)
value_ref = value.detach().clone().to(dtype).requires_grad_(value.requires_grad)
return query_ref, key_ref, value_ref
# Helper function to reserve batch entries in paged attention
def batch_reserve(paged_attention: PagedAttention, target_seq_len: torch.Tensor):
(B,) = target_seq_len.shape
for b in range(B):
paged_attention.reserve(
torch.tensor(b),
target_seq_len[b],
)
# -------------------- Test Class Definition --------------------
class TestFlexDecoding(InductorTestCase):
def _check_equal(
self,
golden_out: torch.Tensor,
ref_out: torch.Tensor,
compiled_out: torch.Tensor,
fudge_factor: float,
tensor_name: Optional[str] = None,
):
compiled_error = (golden_out - compiled_out).abs().mean()
ref_error = (golden_out - ref_out).abs().mean()
if torch.isnan(compiled_error).any() and not torch.isnan(ref_error).any():
self.assertTrue(False, "Output/Grad with NaN")
if ref_error < (1e-4) * golden_out.abs().mean():
print(
"very small ref error of ",
(ref_error.to(torch.float64) * (1e5) / golden_out.abs().mean()),
)
tolerance = Tolerances(atol=2e-1, rtol=2e-1)
torch.testing.assert_close(
golden_out.to(dtype=compiled_out.dtype),
compiled_out,
atol=tolerance.atol,
rtol=tolerance.rtol,
)
elif compiled_error > ref_error * fudge_factor:
name = tensor_name if tensor_name is not None else ""
msg = f"{name} Compiled error {compiled_error} is greater than ref error {ref_error} by more than {fudge_factor}X."
self.assertTrue(False, msg)
def _check_out(
self,
golden_out: torch.Tensor,
ref_out: torch.Tensor,
compiled_out: torch.Tensor,
):
dtype = ref_out.dtype
with torch.no_grad():
# Note: when using float32 the softmax computation might be less accurate
if dtype == torch.float32:
fudge_factor = 10.0
else:
fudge_factor = 1.1
self._check_equal(golden_out, ref_out, compiled_out, fudge_factor, "Out")
def preprocess_paged_attention(
self,
score_mod: Optional[Callable],
q: Tensor,
k: Tensor,
v: Tensor,
block_mask,
dtype: torch.dtype = torch.float16,
page_size: int = 128,
):
assert block_mask is not None, "Must provide block_mask"
Q_B, Q_H, Q_S, _ = q.shape
KV_B, KV_H, KV_S, QK_D = k.shape
_, _, _, V_D = v.shape
# Use a larger batch size for testing
max_batch_size = max(Q_B, KV_B) + 3
n_pages = (KV_S + page_size - 1) // page_size * max_batch_size
MAX_CACHED_SEQ_LEN = n_pages * page_size
k_cache = torch.zeros(
1,
KV_H,
MAX_CACHED_SEQ_LEN,
QK_D,
device="cuda",
dtype=dtype,
)
v_cache = torch.zeros(
1,
KV_H,
MAX_CACHED_SEQ_LEN,
V_D,
device="cuda",
dtype=dtype,
)
# "Randomly" initialize the page table
paged_attention = PagedAttention(n_pages, page_size, max_batch_size)
batch_reserve(
paged_attention,
torch.tensor([KV_S // 4, KV_S // 2, KV_S // 4, KV_S // 3], device="cuda"),
)
batch_reserve(
paged_attention,
torch.tensor([KV_S // 4, KV_S // 2, KV_S // 2, KV_S // 2], device="cuda"),
)
batch_reserve(
paged_attention,
torch.tensor([KV_S // 2, KV_S, KV_S // 2, KV_S], device="cuda"),
)
batch_reserve(
paged_attention, torch.tensor([KV_S, KV_S, KV_S, KV_S], device="cuda")
)
input_pos = (
torch.arange(KV_S, device="cuda", dtype=torch.int32)
.unsqueeze(0)
.expand(KV_B, KV_S)
)
batch_idx = torch.arange(KV_B, device="cuda", dtype=torch.int32)
paged_attention.assign(batch_idx, input_pos, k, v, k_cache, v_cache)
converted_block_mask = paged_attention.convert_logical_block_mask(block_mask)
converted_score_mod = paged_attention.get_score_mod(score_mod)
return k_cache, v_cache, converted_block_mask, converted_score_mod
def run_paged_attention(
self,
score_mod: Optional[Callable],
q: Tensor,
k: Tensor,
v: Tensor,
dtype: torch.dtype = torch.float16,
block_mask: Optional[BlockMask] = None,
):
Q_B, Q_H, _ = q.shape[0], q.shape[1], k.shape[1]
if block_mask is None:
block_mask = create_block_mask(noop_mask, Q_B, 1, 1, S)
(
k_cache,
v_cache,
converted_block_mask,
converted_score_mod,
) = self.preprocess_paged_attention(
score_mod, q, k, v, block_mask, dtype, block_mask.BLOCK_SIZE[1]
)
compiled_sdpa = torch.compile(flex_attention)
compiled_out, compiled_lse = compiled_sdpa(
q,
k_cache,
v_cache,
return_lse=True,
block_mask=converted_block_mask,
score_mod=converted_score_mod,
enable_gqa=(not q.shape[1] == k.shape[1]),
)
return compiled_out, compiled_lse
def run_test_with_paged_attention(
self,
score_mod: Optional[Callable],
dtype: torch.dtype = torch.float16,
Q_B: int = B,
Q_H: int = Hq,
Q_S: int = 1,
QK_D: int = D,
KV_B: int = B,
KV_H: int = Hkv,
KV_S: int = S,
V_D: int = D,
block_mask: Optional[BlockMask] = None,
):
assert Q_H % KV_H == 0
q = torch.randn(
(Q_B, Q_H, Q_S, QK_D),
dtype=dtype,
device="cuda",
requires_grad=False,
)
k = torch.randn(
(KV_B, KV_H, KV_S, QK_D),
dtype=dtype,
device="cuda",
requires_grad=False,
)
v = torch.randn(
(KV_B, KV_H, KV_S, V_D),
dtype=dtype,
device="cuda",
requires_grad=False,
)
q_ref, k_ref, v_ref = query_key_value_clones(q, k, v)
q_gold, k_gold, v_gold = query_key_value_clones(q, k, v, torch.float64)
if block_mask is None:
block_mask = create_block_mask(noop_mask, Q_B, 1, 1, KV_S)
sdpa_partial = create_attention(
score_mod, block_mask, enable_gqa=(not Q_H == KV_H)
)
golden_out, gold_lse = sdpa_partial(q_gold, k_gold, v_gold, return_lse=True)
ref_out, ref_lse = sdpa_partial(q_ref, k_ref, v_ref, return_lse=True)
compiled_out, compiled_lse = self.run_paged_attention(
score_mod, q, k, v, dtype, block_mask
)
self._check_out(golden_out, ref_out, compiled_out)
self._check_out(gold_lse, ref_lse, compiled_lse)
# -------------------- The Test Method --------------------
@patch.object(torch._inductor.config, "max_autotune", True)
def test_max_autotune_with_captured(self):
# Create captured buffers
head_scale = torch.randn(Hq, device="cuda")
batch_scale = torch.randn(B, device="cuda")
tok_scale = torch.randn(S, device="cuda")
q_scale = torch.randn(1, device="cuda")
def bias_mod(score, batch, head, token_q, token_kv):
score = score + tok_scale[token_kv]
score = score + q_scale[token_q]
score = score + batch_scale[batch]
score = score + head_scale[head]
return score
self.run_test_with_paged_attention(bias_mod)
# -------------------- Main --------------------
if __name__ == "__main__":
# Run the test using unittest's CLI.
unittest.main()
```
cc @chauhang @penguinwu @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov @zou3519 @ydwu4 @bdhirsh @Chillee @yanboliang @BoyuanFeng
| true
|
2,864,067,105
|
not for land: just testing
|
vkuzo
|
closed
|
[
"release notes: quantization",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Summary:
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
Fixes #ISSUE_NUMBER
| true
|
2,864,050,383
|
add the `torch.float8_e8m0fnu` dtype to PyTorch
|
vkuzo
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 8
|
CONTRIBUTOR
|
Summary:
Continuing the work from https://github.com/pytorch/pytorch/pull/146427
Adds the `torch.float8_e8m0fnu` dtype to PyTorch, as detailed in
https://github.com/pytorch/pytorch/issues/146414 . Please see the issue for a detailed definition of the format. Example of basic functionality:
```python
import torch
# round trip
x0 = torch.randn(4, 4, dtype=torch.float32)
x1 = x0.to(torch.float8_e8m0fnu) # RNE rounding
x2 = x1.to(torch.float32) # 2 ** exponent
# creation with empty
x0 = torch.empty(4, 4, dtype=torch.float8_e8m0fnu)
# printing
print(x0)
```
Done in this PR:
* numerical correctness
* op coverage (except for `torch._scaled_mm`): create tensor, cast to/from float32
* printing a tensor works
For future PRs:
* performance optimizations for casting
* torch._scaled_mm
* PT2
* various cleanups (detailed in comments with issue numbers)
Test Plan:
```
pytest test/quantization/core/experimental/test_float8.py -s
```
Reviewers:
Subscribers:
Tasks:
Tags:
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,864,046,480
|
TCPStore: soft fail bind when agent store active
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
MEMBER
|
This makes it easier to roll out `TORCHELASTIC_USE_AGENT_STORE` by opportunistically swallowing bind errors when the agent store is enabled and the port matches `MASTER_PORT`.
This should be very safe as if the store is somehow not up and the envs are set, the TCPStore client connections will fail to connect so we end up with a slightly different error message but success/failure behavior is identical.
This also pybinds `c10d::SocketError` into Python so we can assert on the error type in tests.
https://docs.google.com/document/d/1CzOn_N53AiFxWGgbyMWSnd2elCJd4lZ-ajPg2lzcxoM/edit?tab=t.0#heading=h.2j2f5dimrdau
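A minimal sketch of the rollout pattern this enables (the env-var names come from the description above, but the value handling and constructor defaults here are assumptions, not the actual server-side bind logic):
```python
import os

import torch.distributed as dist

# Assumed illustration: when the elastic agent already hosts the store on
# MASTER_PORT, attach as a client instead of trying to bind a second server.
agent_store = os.environ.get("TORCHELASTIC_USE_AGENT_STORE", "").lower() in ("1", "true")
host = os.environ.get("MASTER_ADDR", "127.0.0.1")
port = int(os.environ.get("MASTER_PORT", "29500"))

store = dist.TCPStore(host, port, is_master=not agent_store)
store.set("rank0_ready", "1")
print(store.get("rank0_ready"))
```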
Test plan:
```
pytest test/distributed/test_store.py
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o
| true
|
2,864,026,073
|
Add current cuda device index to FXGraphCache key
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147464
This PR intends to fix the cache related issues from https://github.com/pytorch/pytorch/issues/147405.
It does *not* handle the dynamo recompile case in process, because it does not introduce any extra guards. For FXGraphCache and AOTAutogradCache, we simply have to have the device context in the cache key.
Note that for any function that accepts tensor inputs, the device context is naturally already included in the cache key by the metadata of example inputs. However, for functions that return constants or have no arguments, the device context still needs to be in the cache key.
A more robust fix for this would be to have inductor generate device guards that are dynamic, instead of specialized. This would also help us share more cache artifacts.
I've added unit tests for FXGraphCache and AOTAutogradCache, both of which would fail without this change.
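A minimal sketch of the failure mode being guarded against (illustrative only; the real coverage is in the unit tests mentioned above):
```python
import torch

@torch.compile
def make_ones():
    # No tensor arguments: the example-input metadata cannot record the current device.
    return torch.ones(8, device="cuda")

torch.cuda.set_device(0)
a = make_ones()  # compiles and populates FXGraphCache / AOTAutogradCache for cuda:0

torch.cuda.set_device(1)
b = make_ones()  # with the device index in the key, this correctly recompiles for cuda:1
print(a.device, b.device)
```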
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D69875939](https://our.internmc.facebook.com/intern/diff/D69875939)
| true
|
2,863,941,403
|
Performance Regression nightly 2025/02/08→02/09, on nanogpt speedrun
|
YouJiacheng
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 17
|
CONTRIBUTOR
|
### 🐛 Describe the bug
On the 3.28 track: torch-2.7.0.dev20250209 is 2 seconds slower than torch-2.7.0.dev20250208.
On the 2.92 track:
02/08: ~1470s
02/09: ~1483s
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250209+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.9 (main, Feb 5 2025, 19:10:45) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 168
On-line CPU(s) list: 0-161
Off-line CPU(s) list: 162-167
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 42
Socket(s): 2
Stepping: 8
BogoMIPS: 5199.53
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.9 MiB (84 instances)
L1i cache: 2.6 MiB (84 instances)
L2 cache: 168 MiB (84 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-83
NUMA node1 CPU(s): 84-167
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250209+cu126
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,863,800,946
|
add the `torch.float8_e8m0fnu` dtype to PyTorch
|
vkuzo
|
closed
|
[
"module: cpu",
"ciflow/trunk",
"release notes: quantization"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147462
Summary:
Continuing the work from https://github.com/pytorch/pytorch/pull/146427.
Adds the `torch.float8_e8m0fnu` dtype to PyTorch, as detailed in
https://github.com/pytorch/pytorch/issues/146414 . Please see the issue for a detailed definition of the format. Example of basic functionality:
```python
import torch
# round trip
x0 = torch.randn(4, 4, dtype=torch.float32)
x1 = x0.to(torch.float8_e8m0fnu) # RNE rounding
x2 = x1.to(torch.float32) # 2 ** exponent
# creation with empty
x0 = torch.empty(4, 4, dtype=torch.float8_e8m0fnu)
# printing
print(x0)
```
Done in this PR:
* numerical correctness
* op coverage (except for `torch._scaled_mm`): create tensor, cast to/from float32
* printing a tensor works
For future PRs:
* performance optimizations for casting
* torch._scaled_mm
* PT2
* various cleanups (detailed in comments with issue numbers)
Test Plan:
```
pytest test/quantization/core/experimental/test_float8.py -s
```
Reviewers:
Subscribers:
Tasks:
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
Differential Revision: [D69860805](https://our.internmc.facebook.com/intern/diff/D69860805)
| true
|
2,863,778,374
|
[Easy] Add Delimeter To Show Where Allocation Addr Begins
|
sraikund16
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler",
"topic: improvements"
] | 7
|
CONTRIBUTOR
|
Summary: When we print the addr we append an "s" or a "b" to the beginning of it. Since the addr is in hex, a user might be confused and think the "b" is part of the address. Added an apostrophe to clear this up.
Test Plan: CI
Differential Revision: D69828538
| true
|
2,863,754,153
|
[ROCm] scaled_dot_product_attention using mem-efficient backend (aotriton) produces wrong outputs with custom attn_mask on torch 2.6.0+rocm6.2.4
|
fxmarty-amd
|
closed
|
[
"module: rocm",
"triaged",
"module: sdpa"
] | 3
|
NONE
|
### 🐛 Describe the bug
Hi,
As discussed on Slack and in https://github.com/huggingface/transformers/issues/30056#issuecomment-2657390613, SDPA with a custom attn_mask using the mem-efficient backend (aotriton 0.8.0) produces wrong outputs on the torch 2.6.0 stable ROCm release. This is fixed on torch nightly, which uses aotriton 0.8.2.
Reproduction:
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask_for_sdpa
batch_size = 2
num_heads = 32
head_dim = 128
num_tokens_q = 7
num_tokens_kv = num_tokens_q
device= "cuda"
dtype = torch.float16
num_pad_tokens = 3
query = torch.rand(batch_size, num_heads, num_tokens_q, head_dim, dtype=dtype, device=device) - 0.5
key = torch.rand(batch_size, num_heads, num_tokens_q, head_dim, dtype=dtype, device=device) - 0.5
value = torch.rand(batch_size, num_heads, num_tokens_q, head_dim, dtype=dtype, device=device) - 0.5
attn_mask_2d = torch.ones(batch_size, num_tokens_q, dtype=torch.int32, device=device)
attn_mask_2d[1][:num_pad_tokens] = 0 # simulate padding
attn_mask_4d = _prepare_4d_causal_attention_mask_for_sdpa(
attn_mask_2d,
input_shape=(batch_size, num_tokens_q),
inputs_embeds=query, # this is only used to retrieve device, dtype.
past_key_values_length=0,
)
print("attn_mask_4d", attn_mask_4d)
with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
sdpa_out_efficient = torch.nn.functional.scaled_dot_product_attention(
query,
key,
value,
attn_mask=attn_mask_4d
)
with sdpa_kernel(SDPBackend.MATH):
sdpa_out_math = torch.nn.functional.scaled_dot_product_attention(
query,
key,
value,
attn_mask=attn_mask_4d
)
with sdpa_kernel(SDPBackend.MATH):
sdpa_out_math_cpu = torch.nn.functional.scaled_dot_product_attention(
query.cpu(),
key.cpu(),
value.cpu(),
attn_mask=attn_mask_4d.cpu()
)
print("[rocm math vs rocm mem-efficient] Median abs diff, non padded sequence:", (sdpa_out_efficient[0] - sdpa_out_math[0]).abs().median())
print("[rocm math vs rocm mem-efficient] Max abs diff, non padded sequence:", (sdpa_out_efficient[0] - sdpa_out_math[0]).abs().max())
print("[rocm math vs rocm mem-efficient] Median abs diff, padded sequence:", (sdpa_out_efficient[1, :, num_pad_tokens:] - sdpa_out_math[1, :, num_pad_tokens:]).abs().median())
print("[rocm math vs rocm mem-efficient] Max abs diff, padded sequence:", (sdpa_out_efficient[1, :, num_pad_tokens:] - sdpa_out_math[1, :, num_pad_tokens:]).abs().max())
sdpa_out_efficient = sdpa_out_efficient.cpu()
print("\n[cpu math vs rocm mem-efficient] Median abs diff, non padded sequence:", (sdpa_out_math_cpu[0] - sdpa_out_efficient[0]).abs().median())
print("[cpu math vs rocm mem-efficient] Max abs diff, non padded sequence:", (sdpa_out_math_cpu[0] - sdpa_out_efficient[0]).abs().max())
print("[cpu math vs rocm mem-efficient] Median abs diff, padded sequence:", (sdpa_out_math_cpu[1, :, num_pad_tokens:] - sdpa_out_efficient[1, :, num_pad_tokens:]).abs().median())
print("[cpu math vs rocm mem-efficient] Max abs diff, padded sequence:", (sdpa_out_math_cpu[1, :, num_pad_tokens:] - sdpa_out_efficient[1, :, num_pad_tokens:]).abs().max())
sdpa_out_math = sdpa_out_math.cpu()
print("\n[cpu math vs rocm math] Median abs diff, non padded sequence:", (sdpa_out_math_cpu[0] - sdpa_out_math[0]).abs().median())
print("[cpu math vs rocm math] Max abs diff, non padded sequence:", (sdpa_out_math_cpu[0] - sdpa_out_math[0]).abs().max())
print("[cpu math vs rocm math] Median abs diff, padded sequence:", (sdpa_out_math_cpu[1, :, num_pad_tokens:] - sdpa_out_math[1, :, num_pad_tokens:]).abs().median())
print("[cpu math vs rocm math] Max abs diff, padded sequence:", (sdpa_out_math_cpu[1, :, num_pad_tokens:] - sdpa_out_math[1, :, num_pad_tokens:]).abs().max())
```
which gives
```
attn_mask_4d tensor([[[[ 0., -65504., -65504., -65504., -65504., -65504., -65504.],
[ 0., 0., -65504., -65504., -65504., -65504., -65504.],
[ 0., 0., 0., -65504., -65504., -65504., -65504.],
[ 0., 0., 0., 0., -65504., -65504., -65504.],
[ 0., 0., 0., 0., 0., -65504., -65504.],
[ 0., 0., 0., 0., 0., 0., -65504.],
[ 0., 0., 0., 0., 0., 0., 0.]]],
[[[ -0., -0., -0., -0., -0., -0., -0.],
[ -0., -0., -0., -0., -0., -0., -0.],
[ -0., -0., -0., -0., -0., -0., -0.],
[-65504., -65504., -65504., 0., -65504., -65504., -65504.],
[-65504., -65504., -65504., 0., 0., -65504., -65504.],
[-65504., -65504., -65504., 0., 0., 0., -65504.],
[-65504., -65504., -65504., 0., 0., 0., 0.]]]],
device='cuda:0', dtype=torch.float16)
[rocm math vs rocm mem-efficient] Median abs diff, non padded sequence: tensor(0., device='cuda:0', dtype=torch.float16)
[rocm math vs rocm mem-efficient] Max abs diff, non padded sequence: tensor(0.0002, device='cuda:0', dtype=torch.float16)
[rocm math vs rocm mem-efficient] Median abs diff, padded sequence: tensor(0.0991, device='cuda:0', dtype=torch.float16)
[rocm math vs rocm mem-efficient] Max abs diff, padded sequence: tensor(0.6846, device='cuda:0', dtype=torch.float16)
[cpu math vs rocm mem-efficient] Median abs diff, non padded sequence: tensor(0., dtype=torch.float16)
[cpu math vs rocm mem-efficient] Max abs diff, non padded sequence: tensor(0.0002, dtype=torch.float16)
[cpu math vs rocm mem-efficient] Median abs diff, padded sequence: tensor(0.0991, dtype=torch.float16)
[cpu math vs rocm mem-efficient] Max abs diff, padded sequence: tensor(0.6846, dtype=torch.float16)
[cpu math vs rocm math] Median abs diff, non padded sequence: tensor(0., dtype=torch.float16)
[cpu math vs rocm math] Max abs diff, non padded sequence: tensor(6.1035e-05, dtype=torch.float16)
[cpu math vs rocm math] Median abs diff, padded sequence: tensor(0., dtype=torch.float16)
[cpu math vs rocm math] Max abs diff, padded sequence: tensor(6.1035e-05, dtype=torch.float16)
```
As we can see, SDPA on ROCm with mem-efficient attention gives wrong outputs. This causes issues in batched generation in Transformers: https://github.com/huggingface/transformers/issues/30056#issuecomment-2657390613
The root cause is a bug in aotriton 0.8.0 that is shipped with PyTorch 2.6.0+rocm6.2.4.
Using aotriton 0.8.2 (https://github.com/ROCm/aotriton/releases/tag/0.8.2b) fixes this issue: specifically, grab the asset from https://github.com/ROCm/aotriton/releases/tag/0.8.2b and replace `torch/lib/aotriton.images/` with the 0.8.2 release `aotriton.images/`.
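A sketch of that manual swap in Python (the download path is illustrative, and a backup of the original directory is kept):
```python
import pathlib
import shutil

import torch

# Illustrative path: point this at the extracted 0.8.2b release asset.
new_images = pathlib.Path("~/Downloads/aotriton-0.8.2b/aotriton.images").expanduser()
old_images = pathlib.Path(torch.__file__).parent / "lib" / "aotriton.images"

shutil.move(old_images, old_images.with_name("aotriton.images.bak"))  # keep a backup
shutil.copytree(new_images, old_images)
```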
Diff between math and mem-efficient is then much more reasonable:
```
[rocm math vs rocm mem-efficient] Median abs diff, non padded sequence: tensor(0., device='cuda:0', dtype=torch.float16)
[rocm math vs rocm mem-efficient] Max abs diff, non padded sequence: tensor(0.0002, device='cuda:0', dtype=torch.float16)
[rocm math vs rocm mem-efficient] Median abs diff, padded sequence: tensor(0., device='cuda:0', dtype=torch.float16)
[rocm math vs rocm mem-efficient] Max abs diff, padded sequence: tensor(0.0002, device='cuda:0', dtype=torch.float16)
[cpu math vs rocm mem-efficient] Median abs diff, non padded sequence: tensor(0., dtype=torch.float16)
[cpu math vs rocm mem-efficient] Max abs diff, non padded sequence: tensor(0.0002, dtype=torch.float16)
[cpu math vs rocm mem-efficient] Median abs diff, padded sequence: tensor(0., dtype=torch.float16)
[cpu math vs rocm mem-efficient] Max abs diff, padded sequence: tensor(0.0002, dtype=torch.float16)
[cpu math vs rocm math] Median abs diff, non padded sequence: tensor(0., dtype=torch.float16)
[cpu math vs rocm math] Max abs diff, non padded sequence: tensor(0.0001, dtype=torch.float16)
[cpu math vs rocm math] Median abs diff, padded sequence: tensor(0., dtype=torch.float16)
[cpu math vs rocm math] Max abs diff, padded sequence: tensor(4.7684e-07, dtype=torch.float16)
```
Could using aotriton 0.8.2 in a torch patch release be considered? cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @xinyazhang @atalman
Thank you!
### Versions
```
PyTorch version: 2.6.0+rocm6.2.4
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41134-65d174c3e
OS: Ubuntu 24.04 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI250X/MI250 (gfx90a:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41134
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD EPYC 73F3 16-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 61%
CPU max MHz: 4036.6211
CPU min MHz: 1500.0000
BogoMIPS: 6987.05
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-genai==0.5.2
[pip3] onnxsim==0.4.36
[pip3] pytorch-triton-rocm==3.2.0
[pip3] torch==2.6.0+rocm6.2.4
[pip3] torchaudio==2.6.0+rocm6.2.4
[pip3] torchvision==0.21.0+rocm6.2.4
[conda] Could not collect
```
| true
|
2,863,662,746
|
[ROCm] Fix sort for non-standard bool
|
pragupta
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/periodic",
"rocm",
"ciflow/rocm"
] | 16
|
CONTRIBUTOR
|
When converting from uint8 to bool using the `view` op, we get a bool that has 0 for false and an arbitrary non-zero value for true. However, these kinds of bools have undefined behavior; only the last bit is read as 0 or 1 when converting to false or true.
In this fix, we convert the bools to uint8, which maps false to 0 and any non-zero value to 1, essentially converting a non-standard bool into a standard bool and fixing the sort op for non-standard bools.
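A minimal sketch of the kind of input involved (values and shapes are assumptions; see the linked issue for the original repro):
```python
import torch

# `view` reinterprets the uint8 payload, so True may be stored as 2, 255, ...
raw = torch.tensor([0, 2, 1, 0, 255], dtype=torch.uint8, device="cuda")
weird_bool = raw.view(torch.bool)  # non-standard bools

# The fix canonicalizes through uint8, mapping any non-zero payload to 1,
# so sorting behaves the same as for standard bools.
canonical = weird_bool.to(torch.uint8).bool()
print(weird_bool.sort().values)
print(canonical.sort().values)
```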
Fixes #139972
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,863,556,129
|
`torch.nn.functional.conv1d` can cause a `Floating point exception (core dumped)`
|
cybersupersoap
|
closed
|
[
"module: crash",
"module: nn",
"triaged",
"module: mkldnn",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
A `Floating point exception` will be raised when using `torch.nn.functional.conv1d`
```python
import torch
arg_1_tensor = torch.rand([20, 16, 50], dtype=torch.float32)
arg_1 = arg_1_tensor.clone()
arg_2_tensor = torch.rand([33, 16, 3], dtype=torch.float32)
arg_2 = arg_2_tensor.clone()
arg_3_tensor = torch.rand([33], dtype=torch.float32)
arg_3 = arg_3_tensor.clone()
arg_4_0 = 2**32
arg_4 = [arg_4_0]
arg_5_0 = 0
arg_5 = [arg_5_0]
arg_6_0 = 1
arg_6 = [arg_6_0]
arg_7 = 1
res = torch.nn.functional.conv1d(arg_1, arg_2, arg_3, arg_4, arg_5, arg_6, arg_7)
```
Error messages:
```
Floating point exception (core dumped)
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu`.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @wenzhe-nrv
| true
|
2,863,539,565
|
`torch.svd` can cause an `INTERNAL ASSERT FAILED`
|
cybersupersoap
|
closed
|
[
"module: crash",
"triaged",
"module: mkl",
"module: third_party",
"module: linear algebra",
"topic: fuzzer"
] | 3
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when using `torch.svd`
```python
import torch
arg_1_tensor = torch.rand([2, 2**31], dtype=torch.float32) # Setting the size of the input data to a large value
arg_1 = arg_1_tensor.clone()
arg_2 = False
res = torch.svd(arg_1, compute_uv=arg_2)
```
Error messages:
```
Intel oneMKL ERROR: Parameter 3 was incorrect on entry to SGESDD.
false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. linalg.svd: Argument 3 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu`.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,863,529,315
|
`torch.cholesky_solve` can cause an "INTERNAL ASSERT FAILED"
|
cybersupersoap
|
closed
|
[
"module: crash",
"module: mkl",
"module: linear algebra",
"topic: fuzzer"
] | 2
|
NONE
|
### 🐛 Describe the bug
An INTERNAL ASSERT error will be raised when using `torch.cholesky_solve`
```python
import torch
arg_1_tensor = torch.rand([3, 2**31], dtype=torch.float32)
arg_1 = arg_1_tensor.clone()
arg_2_tensor = torch.rand([3, 3], dtype=torch.float32)
arg_2 = arg_2_tensor.clone()
res = torch.cholesky_solve(arg_1, arg_2)
```
Error messages:
```
Intel oneMKL ERROR: Parameter 3 was incorrect on entry to SPOTRS.
false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. cholesky_solve_cpu: Argument 3 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
The error is reproducible with the nightly-build version `2.7.0.dev20250208+cpu`.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,863,288,583
|
DRAFT Revert "[ATen][CUDA] Implement 128 bit vectorization v2 (#145746)"
|
atalman
|
open
|
[
"Stale",
"ciflow/binaries_wheel"
] | 3
|
CONTRIBUTOR
|
This reverts commit e84bf88dde509d44175a0a1c00cec13c9926843e.
Fixes #ISSUE_NUMBER
| true
|
2,863,265,662
|
Update Arm Compute Library (ACL) to v25.02
|
fadara01
|
closed
|
[
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"release notes: releng",
"ciflow/linux-aarch64"
] | 9
|
COLLABORATOR
|
Among other things, this version of ACL fixes the redundant-declaration warning that we're blocked on in #145942, #146620, and #147337, and introduces better scheduling heuristics for GEMMs.
Fixes #ISSUE_NUMBER
cc @malfet @snadampal @milpuz01 @annop-w
| true
|
2,862,839,728
|
[Inductor] Fix `torch.polygamma()` when n == 1
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 24
|
CONTRIBUTOR
|
Fixes #147450
Be consistent with the CPU kernel:
https://github.com/pytorch/pytorch/blob/77dbd2853599a0f8245975746df9b2ff31f86d25/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp#L433-L444
We now get this for the case:
```
Eager: tensor([1.2914e+15]), dtype: torch.float32
Compile: tensor([1.2914e+15]), dtype: torch.float32
Expected: tensor([6.5808e+32], dtype=torch.float64), dtype: torch.float64
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,862,813,188
|
Reland "Introduce new template heuristic for triton autotune configs"
|
jataylo
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ci-no-td",
"ciflow/inductor-rocm"
] | 56
|
COLLABORATOR
|
This change was reverted in https://github.com/pytorch/pytorch/pull/147388 for regressing an internal workload.
I have removed the additional ir.device_type calls in mm_scaled and unpack_mixed_mm.py, which could have been contributing to the additional compile time.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,862,757,509
|
sdfasdf
|
AnnaTrainingG
|
closed
|
[] | 0
|
NONE
|
### 🐛 Describe the bug
0.0778 | 0.6107 | 0.6592 | 0.3743
-- | -- | -- | --
0.0068 | 0.1044 | 0.0728 | 0.05
452700000 | 141926 | 18216000 | 70436
-0.0029126 | -0.0003828 | 0.00516129 | 0.00228267
### Versions
asdf
| true
|
2,862,745,855
|
polygamma is less precise on torch.compile when n == 1
|
maybeLee
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This is similar to my previous issue https://github.com/pytorch/pytorch/issues/143648.
When `n==1`, polygamma executes `trigamma_kernel` under eager but `cal_polygamma` under torch.compile. It seems that executing `trigamma_kernel` instead of `cal_polygamma` is more precise (please see the code below):
```
import torch
f = torch.special.polygamma
cf = torch.compile(f)
n=1
input=torch.tensor([-1.0])
eager = f(n,input)
compile = cf(n,input)
expected = f(n, input.to(torch.float64))
print(f"Eager: {eager}, dtype: {eager.dtype}")
print(f"Compile: {compile}, dtype: {compile.dtype}")
print(f"Expected: {expected}, dtype: {expected.dtype}")
```
Output:
```
Eager: tensor([1.2914e+15]), dtype: torch.float32
Compile: tensor([inf]), dtype: torch.float32
Expected: tensor([6.5808e+32], dtype=torch.float64), dtype: torch.float64
```
Therefore, I suggest adding another PR (like the previous one, https://github.com/pytorch/pytorch/pull/144058) so that `polygamma` calls `trigamma_kernel` when `n==1`.
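In the meantime, a user-side workaround sketch (not the proposed kernel change) is to keep `n == 1` on the eager path so it hits `trigamma_kernel`, and only compile the general case:
```python
import torch

compiled_poly = torch.compile(lambda n, x: torch.special.polygamma(n, x))

def polygamma_precise(n, x):
    if n == 1:
        return torch.special.polygamma(1, x)  # eager -> trigamma_kernel
    return compiled_poly(n, x)

print(polygamma_precise(1, torch.tensor([-1.0])))  # matches the eager result above
print(polygamma_precise(2, torch.tensor([0.5])))   # general case goes through torch.compile
```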
### Versions
PyTorch version: 2.7.0.dev20250109+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 74%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250109+cu126
[pip3] torchaudio==2.6.0.dev20250106+cu124
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250109+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250106+cu124 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,862,527,535
|
gradient checkpointing with use_reentrant=False cannot reduce peak memory
|
KK666-AI
|
open
|
[
"module: activation checkpointing",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
I am comparing the memory cost between `use_reentrant=False` and `use_reentrant=True` when using gradient checkpointing. When setting `use_reentrant=False`, I find the peak memory is exactly the same as without gradient checkpointing.
The following is my script; suppose the script name is `test.py`:
```
import argparse
import torch
import torch.nn as nn
import GPUtil
def get_gpu_memory():
memory_dict = dict()
gpus = GPUtil.getGPUs()
for gpu in gpus:
total = gpu.memoryTotal / 1024 # GB
used = gpu.memoryUsed / 1024 # GB
free = gpu.memoryFree / 1024 # GB
memory = {
"total": total,
"used": used,
"free": free,
"used_ratio": used/total
}
memory_dict[gpu.id] = memory
return memory_dict
class CustomLayer(nn.Module):
def __init__(self, d_model, nhead):
super(CustomLayer, self).__init__()
self.self_attn = nn.MultiheadAttention(d_model, nhead)
self.linear1 = nn.Linear(d_model, d_model * 4)
self.linear2 = nn.Linear(d_model * 4, d_model)
self.norm1 = nn.LayerNorm(d_model)
self.norm2 = nn.LayerNorm(d_model)
self.dropout = nn.Dropout(0.1)
def forward(self, x):
# Self-attention
attn_output, _ = self.self_attn(x, x, x)
x = self.norm1(x + attn_output) # Residual connection + normalization
# Feedforward network
ff_output = self.linear1(x)
ff_output = torch.relu(ff_output)
ff_output = self.linear2(ff_output)
x = self.norm2(x + ff_output) # Residual connection + normalization
x = self.dropout(x) # test dropout.
return x
class Model(nn.Module):
def __init__(self, d_model, nhead, num_layers, use_checkpointing=False, use_reentrant=False):
super(Model, self).__init__()
self.linear = nn.Linear(d_model, d_model)
self.layers = nn.ModuleList([CustomLayer(d_model, nhead) for _ in range(num_layers)])
self.final_layer = nn.Linear(d_model, 1) # Example output layer
self.use_checkpointing = use_checkpointing
self.use_reentrant = use_reentrant
def _forward_layers(self, layer_group, x):
for layer in layer_group:
x = layer(x)
return x
def native_forward(self, x):
for layer in self.layers:
x = layer(x)
return x
def checkpointing_group_layers_forward(self, x):
interval = 4
for i in range(0, len(self.layers), interval):
layer_group = self.layers[i:min(i + interval, len(self.layers))] # Get a group of two layers
x = torch.utils.checkpoint.checkpoint(lambda x: self._forward_layers(layer_group, x), x, preserve_rng_state=False, use_reentrant=self.use_reentrant)
return x
def forward(self, x):
if self.use_checkpointing:
x = self.checkpointing_group_layers_forward(x)
else:
x = self.native_forward(x)
return self.final_layer(x)
def train(model, dataloader, optimizer, device, epoch):
model.train()
for epoch in range(epoch): # Number of epochs
for batch_idx, (data, target) in enumerate(dataloader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = nn.MSELoss()(output, target.view(-1, 1)) # Assuming target has a shape compatible with output
loss.backward() # Backward pass
optimizer.step() # Update model parameters
if batch_idx % 10 == 0:
memory_dict = get_gpu_memory()[0] # device 0
total = memory_dict["total"]
used = memory_dict["used"]
used_ratio = memory_dict["used_ratio"]
print(f"Epoch: {epoch}, Batch: {batch_idx}, Loss: {loss.item()}, "
f"total={'%.2f' % total}, used={'%.2f' % used}, used_ratio={'%.2f' % used_ratio}")
def main(args):
data = torch.randn(args.seq_len, args.dim) # (sequence_length, dim)
target = torch.randn(args.seq_len, 1) # Assume a regression task
dataset = torch.utils.data.TensorDataset(data, target)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size, shuffle=True, drop_last=True)
# Initialize the model, optimizer, and move to device
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = Model(args.dim, args.nhead, args.num_layers, args.use_checkpointing, args.use_reentrant).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# Train the model
train(model, dataloader, optimizer, device, args.epoch)
print("end.")
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument(
"--seq_len",
type=int,
default=2000,
help="sequence length",
)
parser.add_argument(
"--dim",
type=int,
default=1024 * 2,
help="dimension",
)
parser.add_argument(
"--nhead",
type=int,
default=4,
help="number of head",
)
parser.add_argument(
"--num_layers",
type=int,
default=64,
help="number of layers",
)
parser.add_argument(
"--batch_size",
type=int,
default=12,
help="batch size",
)
parser.add_argument(
"--epoch",
type=int,
default=2,
help="epoch",
)
parser.add_argument(
"--use_checkpointing",
action="store_true",
default=False,
help="whether use checkpointing",
)
parser.add_argument(
"--use_reentrant",
action="store_true",
default=False,
help="whether use reentrant",
)
args = parser.parse_args()
return args
if __name__ == "__main__":
args = get_args()
main(args)
```
I ran three experiments:
1) Run without gradient checkpointing, `python -u test.py`; the peak memory cost is 60.90GB.
```
Epoch: 0, Batch: 0, Loss: 1.738878607749939, total=79.65, used=60.89, used_ratio=0.76
Epoch: 0, Batch: 10, Loss: 2.8586766719818115, total=79.65, used=60.90, used_ratio=0.76
Epoch: 0, Batch: 20, Loss: 1.9227111339569092, total=79.65, used=60.90, used_ratio=0.76
```
2) Run with gradient checkpointing but `use_reentrant=False`, `python -u test.py --use_checkpointing`; the peak memory cost is 60.84GB.
```
Epoch: 0, Batch: 0, Loss: 1.4675014019012451, total=79.65, used=60.84, used_ratio=0.76
Epoch: 0, Batch: 10, Loss: 1.1877995729446411, total=79.65, used=60.84, used_ratio=0.76
Epoch: 0, Batch: 20, Loss: 1.4552075862884521, total=79.65, used=60.84, used_ratio=0.76
```
3) Run with gradient checkpointing and `use_reentrant=True`, `python -u test.py --use_checkpointing --use_reentrant`; the peak memory cost is 12.82GB.
```
/opt/conda/envs/folding/lib/python3.11/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn(
Epoch: 0, Batch: 0, Loss: 1.1766417026519775, total=79.65, used=12.82, used_ratio=0.16
/opt/conda/envs/folding/lib/python3.11/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn(
Epoch: 0, Batch: 10, Loss: 0.6375870704650879, total=79.65, used=12.82, used_ratio=0.16
/opt/conda/envs/folding/lib/python3.11/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn(
Epoch: 0, Batch: 20, Loss: 2.0402159690856934, total=79.65, used=12.82, used_ratio=0.16
/opt/conda/envs/folding/lib/python3.11/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn(
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.1.0-23-cloud-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.5.1+cu124
[pip3] torchaudio==2.5.1+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.5.1+cu124 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @soulitzer
| true
|
2,862,468,364
|
[XPU][DO NOT MERGE]Let XPU use VS2019
|
Stonepia
|
closed
|
[
"oncall: jit",
"module: windows",
"open source",
"ciflow/binaries",
"ciflow/trunk",
"release notes: releng",
"topic: not user facing",
"ciflow/binaries_wheel",
"ciflow/xpu",
"module: xpu"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
This PR tries to let XPU build with VS2019.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @fengyuan14 @guangyey
| true
|
2,862,457,025
|
[Export][AO] Include another batch_norm op in supported batch_norms
|
anzr299
|
closed
|
[
"open source",
"release notes: quantization",
"release notes: AO frontend"
] | 2
|
NONE
|
Running export with decompositions in PT export produces a graph with `aten._native_batch_norm_legit_functional.default` instead of the other batch_norm ops already identified in torch AO. This PR aims to include this op in the torch AO utils.
```
class SimpleModel(torch.nn.Module):
def __init__(self, input_dim=10, hidden_dim=20, output_dim=5):
super().__init__()
self.fc1 = torch.nn.Linear(input_dim, output_dim)
self.bn1 = torch.nn.BatchNorm1d(output_dim)
def forward(self, x):
x = self.fc1(x)
x = self.bn1(x)
return x
def get_args(model, node_name):
for node in model.graph.nodes:
print(node.target)
if(node.name == node_name):
return node.args
ex_input = torch.randn((16,10))
ep_decomp = torch.export.export(SimpleModel(), args=(ex_input,)).run_decompositions().module()
print(ep_decomp.code)
```
Produces the following model:
```
def forward(self, x):
x, = fx_pytree.tree_flatten_spec(([x], {}), self._in_spec)
fc1_weight = self.fc1.weight
fc1_bias = self.fc1.bias
bn1_weight = self.bn1.weight
bn1_bias = self.bn1.bias
bn1_running_mean = self.bn1.running_mean
bn1_running_var = self.bn1.running_var
bn1_num_batches_tracked = self.bn1.num_batches_tracked
permute = torch.ops.aten.permute.default(fc1_weight, [1, 0]); fc1_weight = None
addmm = torch.ops.aten.addmm.default(fc1_bias, x, permute); fc1_bias = x = permute = None
add = torch.ops.aten.add.Tensor(bn1_num_batches_tracked, 1)
_native_batch_norm_legit_functional = torch.ops.aten._native_batch_norm_legit_functional.default(addmm, bn1_weight, bn1_bias, bn1_running_mean, bn1_running_var, True, 0.1, 1e-05); addmm = bn1_weight = bn1_bias = None
getitem = _native_batch_norm_legit_functional[0]
getitem_3 = _native_batch_norm_legit_functional[3]
getitem_4 = _native_batch_norm_legit_functional[4]; _native_batch_norm_legit_functional = None
copy__default = torch.ops.aten.copy_.default(bn1_running_mean, getitem_3); bn1_running_mean = getitem_3 = copy__default = None
copy__default_1 = torch.ops.aten.copy_.default(bn1_running_var, getitem_4); bn1_running_var = getitem_4 = copy__default_1 = None
copy__default_2 = torch.ops.aten.copy_.default(bn1_num_batches_tracked, add); bn1_num_batches_tracked = add = copy__default_2 = None
return pytree.tree_unflatten((getitem,), self._out_spec)
```
| true
|
2,862,455,933
|
[XPU] [DO NOT MERGE] Test for permissive flag with vs2019
|
Stonepia
|
closed
|
[
"oncall: jit",
"module: rocm",
"release notes: releng",
"module: xpu"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @gujinghui @fengyuan14 @guangyey
| true
|
2,862,444,836
|
[5/N] Remove unnecessary once flag usage
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 13
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan
| true
|
2,862,426,018
|
NaN values originating from the normalize_weight_jit function during the backward pass
|
Sz520zS
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.cuda.amp import autocast
from thirdparty.swish import Swish as SwishFN
from thirdparty.inplaced_sync_batchnorm import SyncBatchNormSwish
from thirdparty.checkpoint import checkpoint
from util.utils import average_tensor
from collections import OrderedDict

BN_EPS = 1e-5
SYNC_BN = True

OPS = OrderedDict([
    ('res_bnswish', lambda Cin, Cout, stride, dropout: BNSwishConv(Cin, Cout, 3, stride, 1)),
    ('res_bnswish_x2', lambda Cin, Cout, stride, dropout: BNSwishConvX2(Cin, Cout, 3, stride, 1)),
    ('res_gnswish_x2', lambda Cin, Cout, stride, dropout: GNSwishConv(Cin, Cout, 3, stride, 1, 1, dropout)),
    ('mconv_e6k5g0', lambda Cin, Cout, stride, dropout: InvertedResidual(Cin, Cout, stride, ex=6, dil=1, k=5, g=0)),
    ('mconv_e3k5g0', lambda Cin, Cout, stride, dropout: InvertedResidual(Cin, Cout, stride, ex=3, dil=1, k=5, g=0)),
    ('mconv_e6k5g0_gn', lambda Cin, Cout, stride, dropout: InvertedResidualGN(Cin, Cout, stride, ex=6, dil=1, k=5, g=0)),
    ('attn', lambda Cin, Cout, stride, dropout: Attention(Cin))
])

def get_skip_connection(Cin, Cout, stride):  # return the skip connection matching the input stride
    if stride == 1:
        return Identity()
    elif stride == 2:
        return FactorizedReduce(Cin, Cout)
    elif stride == -1:
        return nn.Sequential(UpSample(), Conv2D(Cin, Cout, kernel_size=1))

def norm(t, dim):
    return torch.sqrt(torch.sum(t * t, dim))

def logit(t):
    return torch.log(t) - torch.log(1 - t)

def act(t):
    # The following implementation has lower memory.
    return SwishFN.apply(t)

class Swish(nn.Module):
    def __init__(self):
        super(Swish, self).__init__()

    def forward(self, x):
        return act(x)

# @torch.jit.script
# def normalize_weight_jit(log_weight_norm, weight):  # weight normalization
#     n = torch.exp(log_weight_norm)
#     wn = torch.sqrt(torch.sum(weight * weight, dim=[1, 2, 3]))  # norm(w)
#     weight = n * weight / (wn.unsqueeze(-1).unsqueeze(-1).unsqueeze(-1) + 1e-5)
#     return weight

@torch.jit.script
def normalize_weight_jit(log_weight_norm, weight):
    log_weight_norm = torch.clamp(log_weight_norm, min=-10, max=10)  # clamp the range
    n = torch.exp(log_weight_norm)
    wn = torch.sqrt(torch.sum(weight * weight, dim=[1, 2, 3]))
    weight = n * weight / (wn.unsqueeze(-1).unsqueeze(-1).unsqueeze(-1) + 1e-5)  # add epsilon
    return weight

class Conv2D(nn.Conv2d):
    def __init__(self, C_in, C_out, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=False, data_init=False,
                 weight_norm=True):
        super(Conv2D, self).__init__(C_in, C_out, kernel_size, stride, padding, dilation, groups, bias)
        self.log_weight_norm = None
        if weight_norm:
            init = norm(self.weight, dim=[1, 2, 3]).view(-1, 1, 1, 1)
            # add a small positive constant when initializing log_weight_norm to avoid a too-small initial value
            self.log_weight_norm = nn.Parameter(torch.log(init + 1e-2), requires_grad=True)  # modified
        self.data_init = data_init
        self.init_done = False
        self.weight_normalized = self.normalize_weight()

    def forward(self, x):
        if self.data_init and not self.init_done:
            with torch.no_grad():
                # check whether self.weight contains NaN or Inf values
                if torch.isnan(self.weight).any() or torch.isinf(self.weight).any():
                    print("WARNING: self.weight contains NaN or Inf values before normalization!")
                    # handle NaN or Inf values, e.g. replace them with 0 or another finite constant
                    self.weight = torch.nan_to_num(self.weight)  # modified
                weight = self.weight / (norm(self.weight, dim=[1, 2, 3]).view(-1, 1, 1, 1) + 1e-5)
                bias = None
                out = F.conv2d(x, weight, bias, self.stride, self.padding, self.dilation, self.groups)
                # check whether out contains NaN or Inf values
                if torch.isnan(out).any() or torch.isinf(out).any():
                    print("WARNING: out contains NaN or Inf values after convolution!")
                    # handle NaN or Inf values
                    out = torch.nan_to_num(out)  # modified
                mn = torch.mean(out, dim=[0, 2, 3])
                st = 5 * torch.std(out, dim=[0, 2, 3])
                # check whether st is zero
                if torch.all(st == 0):
                    print("WARNING: st is all zeros!")
                    # handle st being zero, e.g. add a small constant
                    st += 1e-5  # modified
                # get mn and st from other GPUs
                average_tensor(mn, is_distributed=True)
                average_tensor(st, is_distributed=True)
                if self.bias is not None:
                    self.bias.data = - mn / (st + 1e-5)
                # clamp the initial value of log_weight_norm to avoid values that are too large or too small
                self.log_weight_norm.data = -torch.log(torch.clamp((st.view(-1, 1, 1, 1) + 1e-5), min=1e-5))  # modified
                self.init_done = True

        self.weight_normalized = self.normalize_weight()
        bias = self.bias
        return F.conv2d(x, self.weight_normalized, bias, self.stride,
                        self.padding, self.dilation, self.groups)

    def normalize_weight(self):
        """ applies weight normalization """
        if self.log_weight_norm is not None:
            weight = normalize_weight_jit(self.log_weight_norm, self.weight)
        else:
            weight = self.weight
        return weight
```
python train_vada.py --fid_dir $FID_STATS_DIR --data $DATA_DIR/cifar10 --root $CHECKPOINT_DIR --save $EXPR_ID/lsgm1 --vae_checkpoint /root/netdisk/LSGM_yuan/checkpoints/experiment_1/vae1/checkpoint.pt --train_vae --custom_conv_dae --apply_sqrt2_res --fir --dae_arch ncsnpp --embedding_scale 1000 --dataset cifar10 --learning_rate_dae 1e-4 --learning_rate_min_dae 1e-4 --epochs 1000 --dropout 0.2 --batch_size 16 --num_channels_dae 256 --num_scales_dae 3 --num_cell_per_scale_dae 8 --sde_type vpsde --beta_start 0.1 --beta_end 20.0 --sigma2_0 0.0 --weight_decay_norm_dae 1e-3 --weight_decay_norm_vae 1e-3 --time_eps 0.01 --train_ode_eps 1e-6 --eval_ode_eps 1e-6 --train_ode_solver_tol 1e-5 --eval_ode_solver_tol 1e-5 --iw_sample_p drop_all_iw --iw_sample_q reweight_p_samples --use_se --grad_clip_max_norm 1.0
ERROR:
/root/.pyenv/versions/3.8.0/lib/python3.8/site-packages/torch/autograd/__init__.py:173: UserWarning: Error detected in torch::jit::(anonymous namespace)::DifferentiableGraphBackward. Traceback of forward call that caused the error:
File "train_vada.py", line 526, in <module>
utils.init_processes(0, size, main, args)
File "/root/netdisk/LSGM_yuan/util/utils.py", line 694, in init_processes
fn(args)
File "train_vada.py", line 188, in main
train_obj, global_step = train_vada_joint(train_queue, diffusion_cont, dae, dae_optimizer, vae, vae_optimizer,
File "/root/netdisk/LSGM_yuan/training_obj_joint.py", line 46, in train_vada_joint
logits, all_log_q, all_eps = vae(x)
File "/root/.pyenv/versions/3.8.0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/root/netdisk/LSGM_yuan/nvae.py", line 298, in forward
logits = self.image_conditional(s)
File "/root/.pyenv/versions/3.8.0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/root/.pyenv/versions/3.8.0/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/root/.pyenv/versions/3.8.0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/root/netdisk/LSGM_yuan/layers/neural_operations.py", line 178, in forward
self.weight_normalized = self.normalize_weight()
File "/root/netdisk/LSGM_yuan/layers/neural_operations.py", line 187, in normalize_weight
weight = normalize_weight_jit(self.log_weight_norm, self.weight)
(Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:104.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "train_vada.py", line 526, in <module>
utils.init_processes(0, size, main, args)
File "/root/netdisk/LSGM_yuan/util/utils.py", line 694, in init_processes
fn(args)
File "train_vada.py", line 188, in main
train_obj, global_step = train_vada_joint(train_queue, diffusion_cont, dae, dae_optimizer, vae, vae_optimizer,
File "/root/netdisk/LSGM_yuan/training_obj_joint.py", line 133, in train_vada_joint
grad_scalar.scale(q_loss).backward(retain_graph=utils.different_p_q_objectives(args.iw_sample_p, args.iw_sample_q))
File "/root/.pyenv/versions/3.8.0/lib/python3.8/site-packages/torch/_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/root/.pyenv/versions/3.8.0/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Function 'torch::jit::(anonymous namespace)::DifferentiableGraphBackward' returned nan values in its 1th output.
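For reference, a minimal sketch of one way this NaN can appear in the backward pass of the weight normalization (illustrative only, assuming the norm of some filter reaches zero; not the exact repro):
```python
import torch

# The gradient of sqrt(sum(w*w)) is w / sqrt(sum(w*w)), which is 0/0 = NaN when w == 0.
w = torch.zeros(4, requires_grad=True)
n = torch.sqrt(torch.sum(w * w))
(n + 1e-5).backward()   # adding eps *after* the sqrt does not protect the backward pass
print(w.grad)           # tensor([nan, nan, nan, nan])

w2 = torch.zeros(4, requires_grad=True)
n2 = torch.sqrt(torch.sum(w2 * w2) + 1e-10)   # eps *inside* the sqrt keeps the gradient finite
n2.backward()
print(w2.grad)          # tensor([0., 0., 0., 0.])
```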
### Versions
PyTorch 1.8.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,862,418,655
|
MPS: Scaled Dot Product Attention Passes Tensors with Inadmissible Size to `MetalPerformanceShadersGraph`, leading to Crashes
|
FabianSchuetze
|
closed
|
[
"high priority",
"triaged",
"module: regression",
"module: mps",
"module: sdpa"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following code leads to a crash on MPS:
```python
import torch
import torch.nn.functional as F
device = torch.device('mps')
## To provoke the error, a non-contiguous tensor needs to be created
q = torch.rand(3, 592, 4, 49, 32).to(device)
k = torch.rand(3, 592, 4, 49, 32).to(device)
v = torch.rand(3, 592, 4, 49, 32).to(device)
x = F.scaled_dot_product_attention(q, k, v)
```
The error is:
```
loc("mps_matmul"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":43:0)): error: incompatible dimensions
loc("mps_matmul"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":43:0)): error: invalid shape
```
The code runs on CUDA devices without error. The code also runs when the CPU backend is used.
Is a crash the best outcome here? The crash also happens when the env variable `PYTORCH_ENABLE_MPS_FALLBACK=1` is set; I would expect that escape hatch to be used in the case above.
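In the meantime, a possible workaround sketch (my assumption, not verified for every shape): fold the extra leading batch dimensions so MPS sees a 4-D input, then restore the shape.
```python
import torch
import torch.nn.functional as F

device = torch.device('mps')
q = torch.rand(3, 592, 4, 49, 32, device=device)
k = torch.rand(3, 592, 4, 49, 32, device=device)
v = torch.rand(3, 592, 4, 49, 32, device=device)

# Fold (3, 592) into a single batch dim; without a mask, attention is still
# computed independently per (batch, head), so the result is equivalent.
x = F.scaled_dot_product_attention(
    q.reshape(-1, 4, 49, 32),
    k.reshape(-1, 4, 49, 32),
    v.reshape(-1, 4, 49, 32),
).reshape(3, 592, 4, 49, 32)
```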
### Versions
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.31.5
Libc version: N/A
Python version: 3.9.6 (default, Nov 11 2024, 03:15:38) [Clang 16.0.0 (clang-1600.0.26.6)] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Pro
Versions of relevant libraries:
[pip3] flake8==3.8.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] onnxscript==0.2.0
[pip3] torch==2.6.0
[pip3] torchvision==0.20.1
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,862,386,038
|
Use has_triton_package in _inductor.runtime.hints
|
dilililiwhy
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
Use the existing `has_triton_package` helper for the Triton availability check.
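Roughly, the pattern this moves to (a sketch, not the exact diff):
```python
# Before (sketch): an ad-hoc probe for triton.
try:
    import triton
    HAS_TRITON = True
except ImportError:
    HAS_TRITON = False

# After (sketch): reuse the shared helper.
from torch.utils._triton import has_triton_package

HAS_TRITON = has_triton_package()
```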
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,862,306,475
|
Split test_transformers.py
|
Zhenbin-8
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Split test_transformers.py into test_transformers.py and test_transformers_privateuse1.py. Currently, the privateuse1 test cases in test_transformers.py are skipped since they conflict with the CUDA test cases.
| true
|
2,862,304,237
|
Dynamo Unsupported: call_method UserDefinedObjectVariable(dict_items) __iter__ () {}
|
shink
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 8
|
CONTRIBUTOR
|
Repro:
```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x, items):
    it = iter(items)
    return next(it), x.sin()

x = torch.randn(3)
dct = {'a': 3, 'b': 3}
res = f(x, dct.items())
print(res)
```
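A workaround sketch (my suggestion, not part of the report): materialize the `dict_items` view before calling the compiled function, reusing `f`, `x`, and `dct` from the repro above.
```python
# dynamo handles plain lists, so converting the view up front avoids the unsupported call
res = f(x, list(dct.items()))
print(res)
```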
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,862,261,014
|
[fr][fix] Split MatchState and dynamic info for fr analysis downstream
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"suppress-api-compatibility-check",
"suppress-bc-linter"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147439
The original `MatchState` type was declared as a Python Enum. Although we did make it callable, we consume it right away. There are downstream cases where we need it to be a regular Python class, which a Python Enum does not support. So we did a small refactoring to keep both the enum state and the dynamic info (culprit) for the FR analysis script.
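Roughly, the shape of the refactor (a sketch with placeholder member names, not the exact code):
```python
from enum import Enum

class MatchStateType(Enum):
    # placeholder names for the real flight-recorder states
    FULLY_MATCHED = 1
    COLLECTIVE_STATE_MISMATCH = 2

class MatchState:
    """Keeps the static state together with per-mismatch dynamic info (culprit)."""

    def __init__(self, state: MatchStateType, culprit: str = "") -> None:
        self.state = state
        self.culprit = culprit

    def __call__(self, culprit: str = "") -> "MatchState":
        # preserve the old callable behavior while carrying the dynamic info along
        return MatchState(self.state, culprit)
```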
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [D69830994](https://our.internmc.facebook.com/intern/diff/D69830994)
| true
|
2,862,212,443
|
About the trunk workflow triggered by the pull request
|
swgu98
|
closed
|
[] | 0
|
NONE
|
Why does a pull request (e.g. #147434) trigger the trunk workflow based on a push event?
| true
|
2,862,166,247
|
To avoid the default eps being ineffective at some precisions (float16), dynamically set the default value of eps in optimizers such as Adam based on the precision.
|
1994cxy
|
open
|
[
"module: optimizer",
"triaged",
"needs design"
] | 4
|
NONE
|
### 🚀 The feature, motivation and pitch
Recently, I have been developing my own model based on the BLIP-2 model, attempting to incorporate some new features into the input of the Q-Former. Since the feature extraction network for the new features consumes GPU memory, and my GPU only has 80GB of memory, I used float16 precision to train my model. Additionally, I froze all the parameters of the BLIP-2 model and only trained my new feature extraction network.
However, after calling optimizer.step() in the first iteration, I noticed that the parameters of the feature extraction network all became NaN. Initially, I thought it might be due to gradient explosion or an excessively large learning rate, but I later found that these were not the issue. The problem lay in the second moment of the gradients, which was too small. This caused sqrt(v_t) in the denominator of the Adam optimizer to become zero. Although PyTorch adds eps to the denominator to prevent division by zero, I discovered during debugging that the default eps=1e-8 is ineffective under float16 precision.
I hope that future versions of PyTorch can automatically adjust related parameters based on the precision used for training the model, avoiding issues caused by eps becoming ineffective.
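A minimal sketch of the underflow (illustrative only; the exact thresholds follow from float16's smallest subnormal):
```python
import torch

# 1e-8 is below float16's smallest subnormal (~6e-8), so it rounds to zero.
eps = torch.tensor(1e-8, dtype=torch.float16)
print(eps)  # tensor(0., dtype=torch.float16)

# With a tiny second moment, the Adam denominator collapses to zero,
# so the update divides by zero and the parameters blow up to inf/NaN.
v = torch.tensor(1e-10, dtype=torch.float16)
denom = v.sqrt() + eps
print(denom)        # tensor(0., dtype=torch.float16)
print(1.0 / denom)  # tensor(inf, dtype=torch.float16)
```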
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,862,145,953
|
Fix c++ implementation of strip_function_call
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
#143063 was missing handling for a couple of UCS cases and had some bugs in the way it dealt with errors.
- Fix all the UCS handling (and make some of the common code more common)
- Make sure all the error paths return `nullptr`
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147436
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,862,096,260
|
[FlexAttention] Fix weird generate stride call in flex decode
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147435
# Summary
Seems like we had a redundant tuple unpack, and that doesn't appear to be supported in newer Triton.
Fixes https://github.com/pytorch/pytorch/issues/147373
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,862,091,192
|
[reland][cutlass backend] Do not change dtype of GEMM template for cutlass 3x
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Reland of https://github.com/pytorch/pytorch/pull/146877
incorporate forward fix (didn't land): https://github.com/pytorch/pytorch/pull/147185
Summary:
I think this is a change in the right direction.
Right now, when we try to find a cutlass gemm, we generate a bunch of gemm templates and filter out those that don't fit. For example, if we are doing a bf16 x bf16 matmul, the gemm template for fp32 x fp32 is generated and then filtered out.
However, for the dtype of bias, we would attempt to modify the dtype of the gemm template. I think this is a bad idea, since (1) the usable template is also being generated, and (2) this messes with the configuration name of the template.
I tested this offline. There isn't much difference in performance. However, with instantiation level 2222, I noticed way less "C++ compile error". This is probably due to using the right template?
Follow-ups are needed:
1. benchmark and dashboard
2. check our logic for setting alignment
with my change
https://www.internalfb.com/intern/paste/P1729604119/
without my change
https://www.internalfb.com/intern/paste/P1729624806/
Differential Revision: D69825865
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,862,081,557
|
[Inductor] Avoid tensor slice overflow for large step
|
DDEle
|
open
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Fixes #147071
Currently, if the slice step is a value very close to INT64_MAX, the calculation of the slice output length overflows. This PR fixes this problem and thus fixes #147071.
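A rough illustration of the failure mode (not necessarily the exact repro from #147071):
```python
import torch

def f(x):
    # a step close to INT64_MAX: eager slicing returns just the first element,
    # while the old output-length computation could overflow under torch.compile
    return x[::2**63 - 2]

print(f(torch.arange(8)))                 # eager: tensor([0])
print(torch.compile(f)(torch.arange(8)))  # should match eager once the overflow is fixed
```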
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,862,064,372
|
Expose is_available API for torch.backends.mkldnn
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147432
As the title stated.
As with `torch.backends.mkl`, `torch.backends.openmp`, and so on, which all expose an `is_available` API for users, this exposes the same for `torch.backends.mkldnn`.
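A usage sketch of what this exposes, mirroring the other backends:
```python
import torch

# availability checks already exposed by other backends
print(torch.backends.mkl.is_available())
print(torch.backends.openmp.is_available())

# the check exposed here for oneDNN (MKL-DNN)
print(torch.backends.mkldnn.is_available())
```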
| true
|
2,862,048,491
|
xpu: torch.xpu.get_arch_list() to return [] if xpu not compiled
|
dvrogozh
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: improvements",
"topic: not user facing",
"ciflow/xpu",
"release notes: xpu"
] | 4
|
CONTRIBUTOR
|
Initially discussed here: https://github.com/pytorch/pytorch/pull/132945#discussion_r1957366131
Previously `torch.xpu.get_arch_list()` was relaxed to work even if an XPU device is not available. However, we overlooked the case where PyTorch is not compiled with XPU support; in that case the function throws an exception. This commit adjusts this behavior and makes the function return `[]` even if PyTorch is not compiled with XPU support.
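A quick sketch of the intended behavior on a build compiled without XPU support:
```python
import torch

# On a PyTorch build without XPU support this should now return [] instead of raising,
# matching the already-relaxed behavior for "XPU compiled in but no device present".
print(torch.xpu.is_available())   # False
print(torch.xpu.get_arch_list())  # []
```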
CC: @EikanWang @fengyuan14 @guangyey @malfet @albanD
| true
|
2,862,016,908
|
[Quant] flip: throw runtime error for QUInt4x2 and QUInt2x4 input
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel"
] | 6
|
COLLABORATOR
|
Fixes #147208
**Summary**
The `flip` op causes memory corruption for `torch.quint4x2` and `torch.quint2x4` inputs. This is because the TensorIterator-based implementation does not support multiple elements per byte, and `torch.quint4x2` and `torch.quint2x4` are deprecated in PyTorch. So, we add a check to throw a runtime error if the input dtype is `torch.quint4x2` or `torch.quint2x4`.
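A rough repro sketch of the guarded path (assuming `torch.quantize_per_tensor` accepts these sub-byte dtypes; adjust as needed):
```python
import torch

x = torch.quantize_per_tensor(torch.rand(4, 4), scale=0.1, zero_point=0, dtype=torch.quint4x2)
try:
    torch.flip(x, dims=[0])  # previously corrupted memory; now raises
except RuntimeError as e:
    print("expected error:", e)
```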
**Test plan**
```
pytest -s test/test_shape_ops.py -k test_flip_unsupported_dtype
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|