| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,769,714,249
|
[Inductor] Add convolution output size checking to the meta function
|
DDEle
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
Fixes #144013
Adding a size check to the meta function, similar to the one in the CUDA/CPU aten op.
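For illustration, a minimal sketch of the kind of check being added, assuming the standard convolution output-size formula (this is not the actual meta-function code):
```python
def conv_output_size(in_size: int, kernel: int, stride: int, padding: int, dilation: int) -> int:
    # floor((in + 2*padding - dilation*(kernel-1) - 1) / stride) + 1
    out = (in_size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1
    if out <= 0:
        # reject shapes that would produce an empty/negative output, mirroring the CUDA/CPU check
        raise RuntimeError(
            f"Calculated output size would be non-positive ({out}): "
            "input is too small for the given kernel/stride/padding/dilation"
        )
    return out

print(conv_output_size(32, kernel=3, stride=1, padding=0, dilation=1))  # 30
```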
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,769,674,242
|
[Quant][Inductor][X86] Separate binary post op fusion and lowering for qlinear
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144318
* #144312
* __->__ #144224
**Summary**
The current implementation fuses quantized ops and their post ops and lowers the fused op to cpp backend in the same pass. It is better to separate post op fusion and lowering because
- it looks better in terms of design
- we need the post op fusion pass for PT2E quantization eager mode
As one of a series of PRs that perform the separation, this PR moves binary post op fusion of qlinear out of the lowering pass to after the weight-prepack pass. The workflow is:
1. Weight prepack for qlinear so that `dq - linear` patterns are replaced by `onednn.qlinear_pointwise`
2. Fuse `onednn.qlinear_pointwise` and post ops
3. Lower to cpp backend
This PR adds additional `PatternMatcherPass`'s to handle the post op fusion. Pattern matchers used for fusion are reused.
**Test plan**
It is covered by existing UTs in `test_mkldnn_pattern_matcher.py` for post op fusion.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,769,668,541
|
Support LayerNorm2d
|
Luciennnnnnn
|
open
|
[
"module: nn",
"triaged"
] | 4
|
NONE
|
### 🚀 The feature, motivation and pitch
Hi, I want to use a layernorm with BCHW features, where normalization is applied to every pixel, see [this](https://github.com/huggingface/pytorch-image-models/blob/131518c15cef20aa6cfe3c6831af3a1d0637e3d1/timm/layers/norm.py#L71) for a reference. Currently, we have to use 2 `reshape` ops in order to utilize PyTorch's fast layernorm implementation.
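For reference, a minimal sketch of the current workaround (a permute round-trip so that `F.layer_norm` normalizes over the channel dimension of a BCHW tensor); the shapes here are illustrative:
```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 64, 32, 32)             # B, C, H, W
weight, bias = torch.ones(64), torch.zeros(64)

y = x.permute(0, 2, 3, 1)                  # BCHW -> BHWC so the normalized dim is last
y = F.layer_norm(y, (64,), weight, bias, eps=1e-6)
y = y.permute(0, 3, 1, 2)                  # back to BCHW
```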
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,769,548,511
|
Enable clang-analyzer checks of Clang-tidy
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,769,523,676
|
[dynamo][dicts] Skip dict length guard
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144221
* #144165
* #143997
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,769,476,807
|
use `torch.special.xlogy` to implement `x_log_x`
|
randolf-scholz
|
closed
|
[
"module: distributions",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 15
|
CONTRIBUTOR
|
Fixes #144279
Using `x * x.log()` does not produce the correct value when `x=0`.
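A short illustration of the difference:
```python
import torch

x = torch.tensor([0.0, 0.5])
print(x * x.log())                # tensor([    nan, -0.3466])  -- 0 * log(0) evaluates to nan
print(torch.special.xlogy(x, x))  # tensor([ 0.0000, -0.3466])  -- xlogy defines the x=0 case as 0
```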
cc @fritzo @neerajprad @alicanb @nikitaved
| true
|
2,769,405,685
|
Full static typing for `torch.distributions`
|
randolf-scholz
|
open
|
[
"module: distributions",
"module: typing",
"triaged",
"open source",
"release notes: python_frontend"
] | 5
|
CONTRIBUTOR
|
Fixes #144196
Extends #144197 #144106 #144110
## Open Problems / LSP violations
- [ ] `mixture_same_family.py`: `cdf` and `log_prob` violate LSP (argument named `x` instead of `value`).
- suggestion: Imo these kinds of methods should make use of positional-only parameters, at least in base classes.
- [ ] `exp_family.py`: LSP problem with `_log_normalizer` (parent class requires `(*natural_params: Tensor) -> Tensor`, subclasses implement `(a: Tensor, b: Tensor) -> Tensor`).
- suggestion: change parent class signature to `(natural_params: Tuple[Tensor, ...]) -> Tensor`. While this is BC breaking, (a) this is a private method, i.e. implementation detail, and (b) [no one other than torch seems to overwrite it](https://github.com/search?q=%22def+_log_normalizer%28self%2C+*%22&type=code&p=2)
- [ ] `constraints.py`: `dependent_property`: mypy does not apply the same special casing to subclasses of `property` as it does to `property` itself, hence the need for `type: ignore[assignment]` statements.
- affects: `relaxed_bernoulli.py`, `relaxed_categorical.py`, `logistic_normal.py`, `log_normal.py`, `kumaraswamy.py`, `half_cauchy.py`, `half_normal.py`, `inverse_gamma.py`, `gumbel.py`, `weibull.py`.
- suggestion: consider a construction similar to `lazy_property` in `distributions/utils`.
- [ ] `constraints.py` public interface not usable as type hints.
- A crisper design would likely have one class per constraint, instead of using a mix of classes and instances.
- suggestion: Add 1 class per constraint in the public interface; these can be subclasses of the existing ones.
- As a workaround, I currently added a bunch of `TypeAlias`-variants, but that is likely not the best solution.
- [ ] `transforms.py`: `_InverseTransform.with_cache` violates LSP.
- suggestion: change `with_cache` to return `_InverseTransform`.
- [ ] `test_distributions.py`: One test uses [`Dist.arg_constraints.get`](https://github.com/pytorch/pytorch/blob/3649e2e7bde8ff06f6fe6ac4168e879e9e4f5c0a/test/distributions/test_distributions.py#L6753), hence assumes `arg_constraints` is a class-attribute, but the base class `Distribution` defines it as a `@property`.
- [ ] `test_distributions.py`: One test uses [`Dist.support.event_dim`](https://github.com/pytorch/pytorch/blob/3649e2e7bde8ff06f6fe6ac4168e879e9e4f5c0a/test/distributions/test_distributions.py#L1431), hence assumes `support` is a class-attribute, but the base class `Distribution` defines it as a `@property`.
- [ ] `test_distributions.py`: Multiple tests use [`dist.cdf(float)`](https://github.com/pytorch/pytorch/blob/3649e2e7bde8ff06f6fe6ac4168e879e9e4f5c0a/test/distributions/test_distributions.py#L3517), but the base class annotates `cdf(Tensor) -> Tensor`.
- suggestion: replace float values with tensors in test, unless floats should be officially supported. Note that floats are nonsensical for [multivariate distributions](https://en.wikipedia.org/wiki/Cumulative_distribution_function#Multivariate_case), so supporting it would probably require introducing a subclass for univariate distributions.
- [ ] `test_distributions.py`: Multiple tests use [`dist.log_prob(float)`](https://github.com/pytorch/pytorch/blob/3649e2e7bde8ff06f6fe6ac4168e879e9e4f5c0a/test/distributions/test_distributions.py#L4066), but the base class annotates `log_prob(Tensor) -> Tensor`.
## Notes
- `__init__.py`: use `+=` instead of `extends` ([ruff PYI056](https://docs.astral.sh/ruff/rules/unsupported-method-call-on-all/))
- `binomial.py`: Allow `float` arguments in `probs` and `logits` (gets used in tests)
- `constraints.py`: made `_DependentProperty` a generic class, and `_DependentProperty.__call__` polymorphic.
- `constraint_registry.py`: Made `ConstraintRegistry.register` a polymorphic method, checking that the factory is compatible with the constraint.
- `constraint_registry.py`: Needed to add `type: ignore` comments to functions that try to register multiple different constraints at once.
- maybe split them up?
- `dirichlet.py`: `@once_differentiable` is untyped, requires `type: ignore[misc]` comment.
- `dirichlet.py`: `ctx: Any` could be replaced with `ctx: FunctionContext`, however, the type lacks the `saved_tensors` attribute.
- `distribution.py`: `Distribution._get_checked_instance`: accessing `"__init__"` on an instance is unsound, requires a `type: ignore` comment.
- `distribution.py`: Changed `support` from `Optional[Constraint]` to `Constraint` (consistent with the existing docstring, and several functions in tests rely on this assumption)
- `exp_family.py`: small update to `ExponentialFamily.entropy` to fix type error.
- `independent.py`: fixed type bug in `Independent.support`.
- `multivariate_normal.py`: Added `type: ignore` comments to `_batch_mahalanobis` caused by[^1].
- `relaxed_bernoulli.py`: Allow float temperature argument (used in tests)
- `relaxed_categorical.py`: Allow float temperature argument (used in tests)
- `transforms.py`: Needed to change `ComposeTransform.__init__` signature to accept `Sequence[Transform]` rather than just `list[Transform]` (covariance!)
- `transformed_distribution.py`: Needed to change `TransformedDistribution.__init__` signature to accept `Sequence[Transform]` rather than just `list[Transform]` (covariance!)
- `transformed_distribution.py`: `TransformedDistribution.support` is problematic, because the parent class defines it as `@property` but several subclasses define it as an attribute, violating LSP.
- `von_mises.py`: fixed `result` type being initialized as `float` instead of `Tensor`.
- `von_mises.py`: `@torch.jit.script_if_tracing` is untyped, requires `type: ignore[misc]` comment.
- `von_mises.py`: Allow float `loc` and `scale` (used in tests)
[^1]: `torch.Size` is not correctly typed, causing `mypy` to think `Size + Size` is `tuple[int, ...]` instead of `Size`, see <https://github.com/pytorch/pytorch/issues/144218>.
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @malfet @xuzhao9 @gramster
| true
|
2,769,403,709
|
Improve static typing for `torch.Size`
|
randolf-scholz
|
closed
|
[
"module: typing",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
There are several issues with the current type hints for `torch.Size`:
https://github.com/pytorch/pytorch/blob/9f94710e48bfefc6a8e32af956e45d5a847c6467/torch/_C/__init__.pyi.in#L172-L180
It simply subclasses `tuple[int, ...]`, but this causes several typing errors, for example:
```python
import torch
x = torch.Size([1,2,3])
y = torch.Size([4,5,6])
reveal_type(x+y) # tuple[int, ...], not Size !!!
```
[This is because `tuple.__add__` is annotated to return `tuple`](https://github.com/python/typeshed/blob/9f28171658b9ca6c32a7cb93fbb99fc92b17858b/stdlib/builtins.pyi#L985-L988).
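A hypothetical stub-style sketch of how the annotations could preserve the `Size` return type (the methods and overrides shown here are illustrative, not the actual torch stub):
```python
from typing import SupportsIndex

class Size(tuple[int, ...]):
    # annotate tuple operations so they return Size instead of tuple[int, ...]
    def __add__(self, other: tuple[int, ...]) -> "Size": ...  # type: ignore[override]
    def __mul__(self, n: SupportsIndex) -> "Size": ...        # type: ignore[override]
```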
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @malfet @xuzhao9 @gramster
| true
|
2,769,387,753
|
PyTorch Enum PYI stubs are invalid. Fix typing of PYI stubs
|
Skylion007
|
open
|
[
"module: typing",
"module: lint",
"triaged"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
Updating mypy reveals that the way we have defined enums in our PYI stubs is not valid. Specifically, we are not supposed to annotate the type of any values per https://typing.readthedocs.io/en/latest/spec/enums.html#defining-members . Therefore, we need to fix the typing and update it to a more modern typing system. See #143837 for some example errors.
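A hypothetical illustration of the invalid vs. valid pattern (the class and member names here are made up, not the actual torch stubs):
```python
from enum import Enum

class ExampleEnum(Enum):
    # invalid per the typing spec: members must not carry type annotations
    FIRST: int = 1
    SECOND: int = 2

class ExampleEnumFixed(Enum):
    # valid: members are declared by plain assignment only
    FIRST = 1
    SECOND = 2
```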
### Versions
master as of today
cc @ezyang @malfet @xuzhao9 @gramster
| true
|
2,769,344,725
|
Update ddp.rst
|
bm777
|
closed
|
[
"open source"
] | 3
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,769,343,284
|
[dynamo][user-defined] Share _getattr_static between UserDefinedClass and Object VT
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144215
* #144173
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,769,333,069
|
Add overloads to diagonal docs
|
jackson-tsang578
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 10
|
CONTRIBUTOR
|
Fixes #126827. Refactored doc to demonstrate when none of the optional values are passed in. Added another example so that all overloads of the function are covered.
| true
|
2,769,328,227
|
[onnx] Fix bug for exporting torch.cdist into onnx and support 'compute_mode'
|
ygq65536
|
closed
|
[
"module: onnx",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 6
|
CONTRIBUTOR
|
### Fix bug for exporting torch.cdist and support 'compute_mode'
In [cdist,](https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset9.py#L6181) the 'compute_mode' was ignored, which leads to a big difference in the computation flow between the original torch.cdist and the exported onnx file when computing Euclidean distance (p=2). When computing Euclidean distance, running the exported onnx model is 10x slower than running torch.cdist directly, and it is also very likely to cause CUDA OOM unnecessarily for larger matrices.
This change exports the same onnx computation flow as the forward of torch.cdist defined in the [forward implementation](https://github.com/pytorch/pytorch/blob/9225f149ebec1ff16d0ef31105ee8ecf4fb09efc/aten/src/ATen/native/Distance.cpp#L66-L149.) under every compute_mode.
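For context, a minimal sketch of the matrix-multiplication formulation for p=2 that the `use_mm_for_euclid_dist*` modes refer to (an illustrative re-derivation of ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, not the exported ONNX graph itself):
```python
import torch

def cdist_mm(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # squared distances via matmul instead of materializing all pairwise differences
    x_sq = x.pow(2).sum(-1, keepdim=True)        # (B, P, 1)
    y_sq = y.pow(2).sum(-1).unsqueeze(-2)        # (B, 1, R)
    d2 = x_sq + y_sq - 2.0 * (x @ y.transpose(-1, -2))
    return d2.clamp_min(0).sqrt()

x, y = torch.randn(2, 5, 4), torch.randn(2, 7, 4)
print(torch.allclose(cdist_mm(x, y), torch.cdist(x, y, p=2.0), atol=1e-4))  # True
```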
Fixes #144212
| true
|
2,769,327,250
|
[onnx] export unexpected onnx for torch.cdist, which is slow and easy to cause CUDA OOM
|
ygq65536
|
closed
|
[
"module: onnx",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
In [cdist,](https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset9.py#L6181) the 'compute_mode' was ignored, which leads to a big difference in the computation flow between the original torch.cdist and the exported onnx file when computing Euclidean distance (p=2). When computing Euclidean distance, running the exported onnx model is 10x slower than running torch.cdist directly, and it is also very likely to cause CUDA OOM unnecessarily for larger matrices.
Here is the code to replicate this:
```python
import torch
import onnxruntime as ort
import onnx
import torch.onnx
import numpy as np
import time
import ctypes
# export LD_LIBRARY_PATH=/home/vipuser/miniconda3/envs/torch/lib/python3.12/site-packages/nvidia/cudnn/lib/:$LD_CLIBRARY_PATH


class Averager:
    def __init__(self):
        self.call_count = {}
        self.total_sum = {}

    def __call__(self, name, num):
        if name not in self.call_count:
            self.call_count[name] = 0
        if name not in self.total_sum:
            self.total_sum[name] = 0
            # ignore first time
            return
        self.call_count[name] += 1  # Increment call count
        self.total_sum[name] += num  # Add result to total sum

    def get(self, name):
        assert name in self.call_count
        assert name in self.total_sum
        return self.total_sum[name] / self.call_count[name]


class Cdist(torch.nn.Module):
    def forward(self, x, y, p=2.0, compute_mode="use_mm_for_euclid_dist_if_necessary"):
        # type: (Tensor, Tensor, float, str) -> (Tensor)
        r"""Computes batched the p-norm distance between each pair of the two collections of row vectors.
        Args:
            x1 (Tensor): input tensor of shape :math:`B \times P \times M`.
            x2 (Tensor): input tensor of shape :math:`B \times R \times M`.
            p: p value for the p-norm distance to calculate between each vector pair
                :math:`\in [0, \infty]`.
            compute_mode:
                'use_mm_for_euclid_dist_if_necessary' - will use matrix multiplication approach to calculate
                euclidean distance (p = 2) if P > 25 or R > 25
                'use_mm_for_euclid_dist' - will always use matrix multiplication approach to calculate
                euclidean distance (p = 2)
                'donot_use_mm_for_euclid_dist' - will never use matrix multiplication approach to calculate
                euclidean distance (p = 2)
                Default: use_mm_for_euclid_dist_if_necessary.
        If x1 has shape :math:`B \times P \times M` and x2 has shape :math:`B \times R \times M` then the
        output will have shape :math:`B \times P \times R`.
        This function is equivalent to `scipy.spatial.distance.cdist(input,'minkowski', p=p)`
        if :math:`p \in (0, \infty)`. When :math:`p = 0` it is equivalent to
        `scipy.spatial.distance.cdist(input, 'hamming') * M`. When :math:`p = \infty`, the closest
        scipy function is `scipy.spatial.distance.cdist(xn, lambda x, y: np.abs(x - y).max())`.
        """
        return torch.cdist(x, y, p, compute_mode)


def test(compute_mode="use_mm_for_euclid_dist", p=2.0):
    print(">>", compute_mode, p)
    model = Cdist()  # Instantiate the model
    # Export the model to ONNX format
    x = torch.randn(1, 640, 128)
    y = torch.randn(2, 1280, 128)
    torch.onnx.export(model,
                      (x, y, p, compute_mode),  # Pass the dummy inputs as a tuple
                      "cdist_.onnx",            # Output ONNX file
                      input_names=["x", "y"],   # Names for the input tensors
                      output_names=["output"],  # Name for the output tensor
                      # dynamic_axes={"input1": {0: "batch_size"}, "input2": {0: "batch_size"}, "output": {0: "batch_size"}},  # Optional: dynamic batch size
                      opset_version=9)          # Specify the ONNX opset version
    inputs = [x, y]
    names = ["x", "y"]
    inputs_dict = {i: j for i, j in zip(names, inputs)}
    start = time.time()
    output_pytorch = model(x.cuda(), y.cuda(), p, compute_mode).cpu()
    # print(output_pytorch.size(), output_pytorch[0][0][:3])
    # print("torch model run for:", time.time()-start, "s")
    inputs_dict = {i: j.numpy() for i, j in zip(names, inputs)}
    onnx_model_path = "cdist_.onnx"  # Replace with the path to your ONNX model
    session = ort.InferenceSession(onnx_model_path, providers=['CUDAExecutionProvider'])
    start = time.time()
    onnx_output = session.run(["output"], inputs_dict)[0]
    onnx_time_cost = time.time() - start
    # print("onnx model run for:", onnx_time_cost, "s")
    onnx_output_tensor = torch.from_numpy(onnx_output)
    # print(onnx_output_tensor.size(), onnx_output_tensor[0][0][:3])
    if not torch.allclose(output_pytorch, onnx_output_tensor, rtol=1e-2):
        print("The outputs of the PyTorch model and the ONNX model are different!")
    else:
        print("ok")
    return onnx_time_cost


if __name__ == "__main__":
    avg = Averager()
    testtimes = 20
    p = 2.0
    for i in range(testtimes):
        avg("donot_use_mm_for_euclid_dist", test("donot_use_mm_for_euclid_dist", p))
        avg("use_mm_for_euclid_dist_if_necessary", test("use_mm_for_euclid_dist_if_necessary", p))
        avg("use_mm_for_euclid_dist", test("use_mm_for_euclid_dist", p))
    print("donot_use_mm_for_euclid_dist: average time:", avg.get("donot_use_mm_for_euclid_dist"))
    print("use_mm_for_euclid_dist_if_necessary: average time:", avg.get("use_mm_for_euclid_dist_if_necessary"))
    print("use_mm_for_euclid_dist: average time:", avg.get("use_mm_for_euclid_dist"))
```
Simply comparing the time cost of torch.cdist and the exported cdist model under the different modes will show the difference.
Here is a result showing the difference between the three modes for the exported onnx model:
#### Before fixing the bug, the time cost for the onnx model is the same for all modes.
```
donot_use_mm_for_euclid_dist: average time: 0.09469890594482422
use_mm_for_euclid_dist_if_necessary: average time: 0.08542945510462711
use_mm_for_euclid_dist: average time: 0.09019689810903449
```
#### After fixing the bug, the onnx model is 10x faster when using the accelerated computation method.
```
donot_use_mm_for_euclid_dist: average time: 0.10753977926153886
use_mm_for_euclid_dist_if_necessary: average time: 0.008870099720201995
use_mm_for_euclid_dist: average time: 0.011054039001464844
```
### Versions
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 40 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6133 CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 4
BogoMIPS: 4999.90
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat pku ospke md_clear arch_capabilities
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 16 MiB (4 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.20.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.0 pypi_0 pypi
[conda] torchaudio 2.5.0 pypi_0 pypi
[conda] torchvision 0.20.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
| true
|
2,769,284,213
|
`torch.nn.functional.one_hot` has inconsistent execution behavior under torch.compile
|
meetmul
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
### 🐛 Describe the bug
First of all, given the following input:
```python
a = torch.arange(0, 5) % 3 # [0,1,2,0,1]
a = a.to('cuda')
num_classes = 1
```
Directly calling `torch.nn.functional.one_hot` will throw a `RuntimeError`, which seems expected since `max(a) >= num_classes` (although the error message could be better refined):
```python
res = torch.nn.functional.one_hot(a,num_classes)
print(res)
------
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:365: operator(): block: [0,0,0], thread: [1,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:365: operator(): block: [0,0,0], thread: [2,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:365: operator(): block: [0,0,0], thread: [4,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
...
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
However, when calling `torch.nn.functional.one_hot` optimized by `torch.compile`, this optimized API will work normally without exceptions:
```python
res = torch.compile(torch.nn.functional.one_hot)(a,num_classes)
print(res)
------
tensor([[1],
[0],
[0],
[1],
[0]], device='cuda:0')
```
Then an interesting thing happens: if I first call `torch.nn.functional.one_hot` directly and then call the optimized `torch.nn.functional.one_hot`, **the optimized API will raise an Exception instead of working normally as it previously did**:
```python
torch.nn.functional.one_hot(a,num_classes)
res = torch.compile(torch.nn.functional.one_hot)(a,num_classes)
print(res)
------
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:365: operator(): block: [0,0,0], thread: [1,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:365: operator(): block: [0,0,0], thread: [2,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:365: operator(): block: [0,0,0], thread: [4,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
......
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### Versions
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
| true
|
2,769,248,572
|
Add inheritance possibilities
|
MaKaNu
|
open
|
[
"module: sparse",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
While inheriting from a normal tensor is possible, it seems impossible to achieve for a more specialized tensor, since:
```
NotImplementedError: Cannot access storage of SparseTensorImpl
```
In the following forum post https://discuss.pytorch.org/t/how-to-inherit-sparse-tensor/214920 I described the scenario where I wanted to extend a sparse_coo_tensor to create a specialized sparse tensor.
### Alternatives
The only alternative seems to be composition, which is not the right approach for something that only extends/limits the capabilities of the original object.
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,769,186,823
|
Update torch-xpu-ops commit pin
|
EikanWang
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Update the torch-xpu-ops commit to [28cfac20ec662abdb0ac98faf122450013e8f520](https://github.com/intel/torch-xpu-ops/commit/28cfac20ec662abdb0ac98faf122450013e8f520), includes:
- Disable batch_norm vectorization path to fix accuracy issues.
- Fix the LSTM/RNN implementation error.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143984
Approved by: https://github.com/EikanWang, https://github.com/ruidazeng, https://github.com/desertfire, https://github.com/jansel
(cherry picked from commit 1e881ceecfe80532206ca4e0acb64391fab8b935)
cc @voznesenskym @penguinwu @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,769,166,419
|
Apply Ruff fixes and pyupgrade to torch/jit
|
cyyever
|
closed
|
[
"oncall: jit",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: jit",
"suppress-bc-linter"
] | 9
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,769,128,194
|
`pack_padded_sequence ` -> `pad_packed_sequence` can silently truncate input tensor when max length is smaller than actual max sequence length
|
brylee10
|
closed
|
[
"module: nn",
"module: rnn",
"triaged",
"actionable"
] | 2
|
NONE
|
### 🐛 Describe the bug
Running `pack_padded_sequence` -> `pad_packed_sequence` can silently truncate the input tensor when the largest value in `lengths` is smaller than the actual sequence length of the input tensor. This truncation occurs without any warning or error.
Expected Behavior: The `pack_padded_sequence` function should raise a warning or error if the sequence length in the input tensor exceeds the maximum sequence length in `lengths`.
Potential Solution: A `warnings.warn()` message could be added to `pack_padded_sequence` if the length validation fails.
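A minimal sketch of the kind of validation/warning described (a hypothetical helper, not the actual torch implementation):
```python
import warnings
import torch

def warn_on_truncating_lengths(input: torch.Tensor, lengths, batch_first: bool = True) -> None:
    # the padded time dimension is dim 1 when batch_first=True, else dim 0
    max_seq_len = input.size(1) if batch_first else input.size(0)
    if max(lengths) < max_seq_len:
        warnings.warn(
            f"max(lengths)={max(lengths)} is smaller than the padded sequence length "
            f"{max_seq_len}; trailing timesteps will be silently dropped."
        )
```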
**Reproduction**
```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
input_tensor = torch.randn(3, 5, 3)
# Note: the largest specified length (4) is smaller than the actual sequence length (5)
lengths = [4, 2, 3]
packed = pack_padded_sequence(input_tensor, lengths, batch_first=True, enforce_sorted=False)
unpacked, unpacked_lengths = pad_packed_sequence(packed, batch_first=True)
# Outputs: (3, 4, 3)
print("Unpacked Sequence Shape:", unpacked.shape)
# Outputs: [4, 2, 3]
print("Unpacked Lengths:", unpacked_lengths)
print("Original Sequence:", input_tensor)
# Note: the last sequence index has been truncated
print("Unpacked Sequence:", unpacked)
```
### Versions
PyTorch version: 2.5.1
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,769,121,939
|
Error when compiling document
|
aaronjie
|
open
|
[
"module: build",
"module: docs",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
Hi there. I tried to compile the docs using the following commands:
```
git clone https://github.com/pytorch/pytorch.git
cd pytorch/docs
pip install -r requirements.txt
make html
```
However, I encountered the following errors:
```
Traceback (most recent call last):
File "/content/pytorch/docs/source/scripts/build_opsets.py", line 75, in <module>
main()
File "/content/pytorch/docs/source/scripts/build_opsets.py", line 58, in main
aten_ops_list = get_aten()
File "/content/pytorch/docs/source/scripts/build_opsets.py", line 20, in get_aten
parsed_yaml = parse_native_yaml(NATIVE_FUNCTION_YAML_PATH, TAGS_YAML_PATH)
File "/usr/local/lib/python3.10/dist-packages/torchgen/gen.py", line 241, in parse_native_yaml
_GLOBAL_PARSE_NATIVE_YAML_CACHE[path] = parse_native_yaml_struct(
File "/usr/local/lib/python3.10/dist-packages/torchgen/gen.py", line 180, in parse_native_yaml_struct
add_generated_native_functions(rs, bs)
File "/usr/local/lib/python3.10/dist-packages/torchgen/native_function_generation.py", line 479, in add_generated_native_functions
raise AssertionError(
AssertionError: Found an operator that we could not generate an out= variant for: _assert_tensor_metadata(Tensor a, SymInt[]? size=None, SymInt[]? stride=None, ScalarType? dtype=None, *, Device? device=None, Layout? layout=None) -> ().
This type of operators don't have tensor-like return, making it difficult to generate a proper out= variant. If
out= variant is not needed, please add the function name into FUNCTIONAL_OPS_THAT_CANNOT_GET_AN_OUT_VARIANT list.
make: *** [Makefile:25: opset] Error 1
```
Thank you.
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.6.0.74
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvtx==0.2.10
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] torch==2.5.1+cu121
[pip3] torchaudio==2.5.1+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu121
[conda] Could not collect
cc @malfet @seemethere @svekars @brycebortree @sekyondaMeta @AlannaBurke
| true
|
2,769,068,023
|
[5/N] Apply Ruff fixes and pyupgrade to Python 3.9
|
cyyever
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor",
"suppress-bc-linter"
] | 8
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad
| true
|
2,769,066,065
|
Apply Ruff fixes and pyupgrade
|
cyyever
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"release notes: fx",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,769,044,898
|
Can't find amdgpu.ids when installed in venv
|
Outssiss
|
closed
|
[
"needs reproduction",
"module: binaries",
"module: rocm",
"triaged"
] | 5
|
NONE
|
I installed python via pyenv (tried both 3.11.11 and 3.12.8), created a venv and then installed torch in that virtual environment. This throws the warning “amdgpu.ids: No such file or directory” two times in a row; I assume one for the integrated GPU and another one for the dedicated one.
But if instead I install torch directly on the global pyenv install, this warning goes away and everything works as expected.
I then tried installing torch in the global instance and then also in the venv, and noticed that this got rid of the warnings when running in the venv. It turns out that torch in the venv still looks for the amdgpu.ids file in the location of the global install (/home/josef/.pyenv/versions/3.12.8/lib/python3.12/site-packages/torch/share/libdrm/amdgpu.ids) instead of the venv location (/home/josef/CodeThings/PytorchThingsEnv/.venv/lib/python3.12/site-packages/torch/share/libdrm/amdgpu.ids), and if I remove the file from the global location, the warnings come back.
Some information about the machine:
CPU: Ryzen 7 7700X
GPU: Radeon RX 7800 XT
ROCm version: 6.2
OS: Arch Linux
### Versions
Collecting environment information...
amdgpu.ids: No such file or directory
amdgpu.ids: No such file or directory
PyTorch version: 2.5.1+rocm6.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41133-dd7f95766
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.31.3
Libc version: glibc-2.40
Python version: 3.12.8 (main, Jan 5 2025, 00:50:31) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.12.6-arch1-1-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon Graphics (gfx1101)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41133
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 7700X 8-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 64%
CPU max MHz: 5573,0000
CPU min MHz: 545,0000
BogoMIPS: 8986,46
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] pytorch-triton-rocm==3.1.0
[pip3] torch==2.5.1+rocm6.2
[pip3] torchaudio==2.5.1+rocm6.2
[pip3] torchvision==0.20.1+rocm6.2
[conda] Could not collect
cc @seemethere @malfet @osalpekar @atalman @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,769,042,166
|
[ca] dedup node names when AOT bwd graph is reused multiple times
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 3
|
MEMBER
|
This error started popping up in HUD CA benchmarks:
```python
File "/data/users/xmfan/core/b/pytorch/torch/_dynamo/compiled_autograd.py", line 371, in dce
self.fx_tracer.graph.eliminate_dead_code(is_impure)
File "/data/users/xmfan/core/b/pytorch/torch/fx/graph.py", line 1862, in eliminate_dead_code
self.lint()
File "/data/users/xmfan/core/b/pytorch/torch/fx/graph.py", line 1753, in lint
raise RuntimeError(f"Node redefined name {node.name}!")
RuntimeError: Node redefined name aot0_expand!
```
We added CA initial capture's renaming (https://github.com/pytorch/pytorch/pull/133148) to help debug issues with AOT backward, but it errors out when we have multiple instances of the same AOT backward graph. This likely only showed up now because of increased hierarchical graph reuse. I fix it by appending a postfix counter to the node name.
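A minimal sketch of the postfix-counter idea (illustrative only, not the actual compiled_autograd code):
```python
from collections import defaultdict

def dedup_node_name(name: str, counts: defaultdict) -> str:
    # first occurrence keeps the original name; later occurrences get a numeric postfix
    n = counts[name]
    counts[name] += 1
    return name if n == 0 else f"{name}_{n}"

counts = defaultdict(int)
print(dedup_node_name("aot0_expand", counts))  # aot0_expand
print(dedup_node_name("aot0_expand", counts))  # aot0_expand_1
```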
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144202
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,985,751
|
Update core.py to fix typo
|
jxmorris12
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
dype -> dtype
Fixes #ISSUE_NUMBER
| true
|
2,768,977,214
|
[BE][Ez]: Update CUDNN Frontend submodule to 1.9.0
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
* Update CUDNN Frontend to 1.9.0, which includes some API improvements, new features, and bugfixes. This is a header-only lib fix, so it should be pretty straightforward.
* The nicest feature is that it now logs/prints warnings when the compiled CUDNN version does not match the dynamically loaded one.
* Fixes corrupted/truncated log lines from being printed by CUDNN Frontend.
| true
|
2,768,972,651
|
[mps/BE] Fix linter warning/advice.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps"
] | 3
|
MEMBER
|
Two spaces before an inline comment according to E261.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,768,953,718
|
[mps/BE] Enable a test that now passes.
|
dcci
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 9
|
MEMBER
|
After the implementation of floordiv in https://github.com/pytorch/pytorch/commit/464b50dbd7e0692970a2e34aac5b2eeb088741c6 landed, this now passes.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,952,636
|
[typing] Add type hints to `__init__` methods in `torch.distributions`.
|
randolf-scholz
|
closed
|
[
"module: distributions",
"module: typing",
"triaged",
"open source",
"Merged",
"Stale",
"release notes: python_frontend"
] | 19
|
CONTRIBUTOR
|
Fixes #144196
Extends #144106 and #144110
## Open Problems:
- [x] Annotating with `numbers.Number` is a bad idea, should consider using `float`, `SupportsFloat` or some `Procotol`. https://github.com/pytorch/pytorch/pull/144197#discussion_r1903324769
# Notes
- `beta.py`: needed to add `type: ignore` since `broadcast_all` is untyped.
- `categorical.py`: converted `else` branches of mutually exclusive arguments to `if` branch[^2].
- ~~`dirichlet.py`: replaced `axis` with `dim` arguments.~~ #144402
- `gemoetric.py`: converted `else` branches of mutually exclusive arguments to `if` branch[^2].
- ~~`independent.py`: fixed bug in `Independent.__init__` where `tuple[int, ...]` could be passed to `Distribution.__init__` instead of `torch.Size`.~~ **EDIT:** turns out the bug is related to typing of `torch.Size`. #144218
- `independent.py`: made `Independent` a generic class of its base distribution.
- `multivariate_normal.py`: converted `else` branches of mutually exclusive arguments to `if` branch[^2].
- `relaxed_bernoulli.py`: added class-level type hint for `base_dist`.
- `relaxed_categorical.py`: added class-level type hint for `base_dist`.
- ~~`transforms.py`: Added missing argument to docstring of `ReshapeTransform`~~ #144401
- ~~`transforms.py`: Fixed bug in `AffineTransform.sign` (could return `Tensor` instead of `int`).~~ #144400
- `transforms.py`: Added `type: ignore` comments to `AffineTransform.log_abs_det_jacobian`[^1]; replaced `torch.abs(scale)` with `scale.abs()`.
- `transforms.py`: Added `type: ignore` comments to `AffineTransform.__eq__`[^1].
- `transforms.py`: Fixed type hint on `CumulativeDistributionTransform.domain`. Note that this is still an LSP violation, because `Transform.domain` is defined as `Constraint`, but `Distribution.domain` is defined as `Optional[Constraint]`.
- skipped: `constraints.py`, `constraints_registry.py`, `kl.py`, `utils.py`, `exp_family.py`, `__init__.py`.
## Remark
`TransformedDistribution`: `__init__` uses the check `if reinterpreted_batch_ndims > 0:`, which can lead to the creation of `Independent` distributions with only 1 component. This results in awkward code like `base_dist.base_dist` in `LogisticNormal`.
```python
import torch
from torch.distributions import *
b1 = Normal(torch.tensor([0.0]), torch.tensor([1.0]))
b2 = MultivariateNormal(torch.tensor([0.0]), torch.eye(1))
t = StickBreakingTransform()
d1 = TransformedDistribution(b1, t)
d2 = TransformedDistribution(b2, t)
print(d1.base_dist) # Independent with 1 dimension
print(d2.base_dist) # MultivariateNormal
```
One could consider changing this to `if reinterpreted_batch_ndims > 1:`.
[^1]: Usage of `isinstance(value, numbers.Real)` leads to problems with static typing, as the `numbers` module is not supported by `mypy` (see <https://github.com/python/mypy/issues/3186>). This results in us having to add type-ignore comments in several places
[^2]: Otherwise, we would have to add a bunch of `type: ignore` comments to make `mypy` happy, as it isn't able to perform the type narrowing. Ideally, such code should be replaced with structural pattern matching once support for Python 3.9 is dropped.
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @malfet @xuzhao9 @gramster
| true
|
2,768,951,708
|
[typing] Add static type hints to `torch.distributions`.
|
randolf-scholz
|
closed
|
[
"module: distributions",
"module: typing",
"triaged"
] | 7
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Current lack of type hints causes some issues, for instance #76772
- [x] Add type hints for `lazy_property` class (#144110)
- [x] Add type hints for `@property` and `@lazy_property` (#144110)
- [x] Add type hints to `__init__` methods (#144197)
- [ ] Add type hints to methods (#144219)
- [ ] Add type hints to attributes (#144219)
- [ ] Add type hints to utility functions (#144219)
### Alternatives
_No response_
### Additional context
_No response_
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @malfet @xuzhao9 @gramster
| true
|
2,768,951,494
|
[mps/inductor] Add support for floor().
|
dcci
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mps",
"module: inductor",
"ciflow/inductor"
] | 7
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,937,877
|
[EZ][BE] Cleanup `test_mps_basic`
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
- Sort imported tests alphabetically
- Run `add` tests with `check_lowp=False` as it is tested explicitly by parametrization
- Do not hardcode device, but rather use `self.device` property
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,916,380
|
Cholesky mps implementation
|
Isalia20
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 39
|
COLLABORATOR
|
Requested in #77764
The PR is still in draft because it needs some cleanups and optimizations to at least reach CPU performance. Tasks:
- [x] Make `upper=True` work, only `upper=False` works now
- [x] Code cleanup
- [x] Optimizations (though I might need some help on this; tried my best, maybe there is still some more to squeeze out)
- [x] Checks for positive definite input
- [x] Support for (*, N, N) input, currently only supports (B, N, N) input
- [x] Support other dtypes (float16, bfloat16)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,892,573
|
Add support for cpu scalar in addcdiv
|
EmmettBicker
|
closed
|
[
"triaged",
"open source",
"Stale",
"release notes: python_frontend"
] | 5
|
CONTRIBUTOR
|
Fixes #143306
Adds support for CPU scalar for tensor_2 in addcdiv. For example:
```
import torch
a = torch.rand(2, 2, device="cuda")
b = torch.tensor(1e-3)
torch.add(a, b)
torch.addcdiv(a, a, b) # used to fail, now works
```
Nearly identical to #143264
| true
|
2,768,845,758
|
c10::optional -> std::optional in a few places
|
jeanschmidt
|
closed
|
[
"fb-exported",
"release notes: cpp"
] | 45
|
CONTRIBUTOR
|
Test Plan: Sandcastle
Reviewed By: malfet
Differential Revision: D67816636
| true
|
2,768,832,048
|
'>>' operator broken on MPS with uint8, pytorch 2.5.1 and nightlies
|
Vargol
|
closed
|
[
"triaged",
"module: correctness (silent)",
"module: mps"
] | 4
|
NONE
|
### 🐛 Describe the bug
rshift is not working on MPS for uint8; it gives the wrong result. It works for int32. uint8 is used throughout GGUF for quantisation, which is getting more and more important.
```py
>>> import torch
>>> x = torch.Tensor([255]).to(torch.uint8)
>>> print (x)
tensor([255], dtype=torch.uint8)
>>> x >> 4
tensor([15], dtype=torch.uint8)
>>>
>>> print (x)
tensor([255], dtype=torch.uint8)
>>> y = x.to('mps')
>>> print (y)
tensor([255], device='mps:0', dtype=torch.uint8)
>>>
>>> y >> 4
tensor([255], device='mps:0', dtype=torch.uint8)
```
Note int32 works:
```py
>>> x = torch.Tensor([255]).to(torch.int32)
>>> y = x.to('mps')
>>> y >> 4
tensor([15], device='mps:0', dtype=torch.int32)
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241226
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.29.5
Libc version: N/A
Python version: 3.11.10 (main, Sep 7 2024, 08:05:54) [Clang 16.0.0 (clang-1600.0.26.3)] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.6.0.dev20241226
[pip3] torchao==0.7.0
[pip3] torchvideo==0.0.0
[pip3] torchvision==0.22.0.dev20241226
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,768,785,557
|
Update ruff and black to use Python 3.9 as a baseline
|
cyyever
|
closed
|
[
"open source",
"Stale",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,768,680,836
|
BackendCompilerFailed error is raised when applying torch.compile on torch.fill
|
maybeLee
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When compiling torch.fill in a CUDA environment, the compiled function raises a `BackendCompilerFailed` error when the input is a `uint` tensor. It seems that this issue is caused by an invalid argument type when using Triton's `tl.full` API.
Note that directly calling torch.fill with a uint tensor does not lead to any exception.
Here is the code to reproduce:
```python
import torch
f = torch.fill
cf = torch.compile(f)
input = torch.randn(1,2,3).to(torch.uint32).to('cuda')
value = -90
print(f(input,value)) # tensor([[[4294967206, 4294967206, 4294967206],[4294967206, 4294967206, 4294967206]]], device='cuda:0',dtype=torch.uint32)
cf_out = cf(input,value)
```
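For context, this is my reading of the failure rather than something stated in the log: eager wraps the negative fill value into the unsigned range, while the generated kernel hands the raw `-90` to `tl.full` with a `tl.uint32` dtype, which Triton's `get_uint32` rejects. The eager value is plain modular arithmetic:
```
# 4294967206 in the eager output is just -90 reduced modulo 2**32 (uint32 wraparound).
print((-90) % 2**32)  # 4294967206
```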
Although the traceback is super long, the error message seems straightforward:
```
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] Triton compilation failed: triton_poi_fused_fill_0
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] def triton_poi_fused_fill_0(out_ptr0, xnumel, XBLOCK : tl.constexpr):
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xnumel = 6
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xoffset = tl.program_id(0) * XBLOCK
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xmask = xindex < xnumel
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] x0 = xindex
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] tmp0 = tl.full([1], -90, tl.uint32)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] tl.store(out_ptr0 + (x0), tmp0, xmask)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] metadata: {'signature': {'out_ptr0': '*u32', 'xnumel': 'i32'}, 'device': 0, 'constants': {'XBLOCK': 8}, 'configs': [AttrsDescriptor(divisible_by_16=(0,), equal_to_1=())], 'device_type': 'cuda', 'num_warps': 1, 'num_stages': 1, 'debug': True, 'cc': 86}
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] Traceback (most recent call last):
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/language/core.py", line 35, in wrapper
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] return fn(*args, **kwargs)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/language/core.py", line 1223, in full
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] return semantic.full(shape, value, dtype, _builder)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/language/semantic.py", line 530, in full
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] value = get_value_fn(value)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] TypeError: get_uint32(): incompatible function arguments. The following argument types are supported:
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] 1. (self: triton._C.libtriton.ir.builder, arg0: int) -> triton._C.libtriton.ir.value
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] Invoked with: <triton._C.libtriton.ir.builder object at 0x7fca9840ecf0>, -90
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] The above exception was the direct cause of the following exception:
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] Traceback (most recent call last):
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 532, in _precompile_config
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] binary = triton.compile(*compile_args, **compile_kwargs)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/compiler/compiler.py", line 276, in compile
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] module = src.make_ir(options, codegen_fns, context)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] File "/opt/conda/lib/python3.11/site-packages/triton/compiler/compiler.py", line 113, in make_ir
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] triton.compiler.errors.CompilationError: at 7:11:
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] def triton_poi_fused_fill_0(out_ptr0, xnumel, XBLOCK : tl.constexpr):
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xnumel = 6
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xoffset = tl.program_id(0) * XBLOCK
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xindex = xoffset + tl.arange(0, XBLOCK)[:]
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] xmask = xindex < xnumel
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] x0 = xindex
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] tmp0 = tl.full([1], -90, tl.uint32)
E0104 08:55:10.835000 220442 site-packages/torch/_inductor/runtime/triton_heuristics.py:534] [0/0] ^
Traceback (most recent call last):
File "/root/try.py", line 7, in <module>
cf_out = cf(input,value)
^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1404, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1188, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1005, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 733, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 768, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1402, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1076, in run
while self.step():
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 986, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3086, in RETURN_VALUE
self._return(inst)
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3071, in _return
self.output.compile_subgraph(
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1107, in compile_subgraph
self.compile_and_call_fx_graph(
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1390, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1440, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1493, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1472, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1886, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1170, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1155, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 582, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 832, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 201, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 491, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1764, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 575, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 689, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1132, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1047, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1978, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/graph.py", line 2019, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 2769, in load_by_key_path
mod = _reload_python_module(key, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/runtime/compile_tasks.py", line 46, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_root/iy/ciyo7ytism5a7pcooebqj3wuyepsq3xgyjogs2vpfqkhrxqs2l5w.py", line 48, in <module>
triton_poi_fused_fill_0 = async_compile.triton('triton_poi_fused_fill_0', '''
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/async_compile.py", line 214, in triton
kernel.precompile()
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 302, in precompile
compiled_binary, launcher = self._precompile_config(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 532, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/triton/compiler/compiler.py", line 276, in compile
module = src.make_ir(options, codegen_fns, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/triton/compiler/compiler.py", line 113, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CompilationError: at 7:11:
def triton_poi_fused_fill_0(out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 6
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x0 = xindex
tmp0 = tl.full([1], -90, tl.uint32)
^
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gite15442a
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gite15442a
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gite15442a pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng
| true
|
2,768,542,491
|
Wrong meta function for constant_pad_nd
|
ywq880611
|
open
|
[
"triaged",
"oncall: pt2",
"module: decompositions",
"module: inductor"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
While working on this [PR](https://github.com/pytorch/pytorch/pull/140399), I hit a failing test case `python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_fft_hfftn_cuda_float16`; it is related to the `constant_pad_nd` op.
## Summary
I found that there are two different dispatched implementations for `constant_pad_nd`, one used when running it directly on the `meta` device and one used when running it through `inductor`, and they do not produce outputs with the same layout.
## Code
1. Directly run it with `meta` device:
```
import torch
a = torch.empty_strided((2, 4, 5), (20, 1, 4), dtype=torch.complex128, device='meta')
print(a.shape)
print(a.stride())
b = torch.constant_pad_nd(a, [0, 0, 0, -2, 0, 0])
print(b.shape)
print(b.stride())
```
Running the above code on any of the `meta`, `cuda`, or `cpu` devices prints:
```
torch.Size([2, 4, 5])
(20, 1, 4)
torch.Size([2, 2, 5])
(10, 1, 2)
```
So the `meta` device matches the `cpu` and `cuda` devices; it does **meet our expectation**.
2. torch.compile:
We run `constant_pad_nd` with the same arguments, but through torch.compile.
```
import torch
a = torch.empty_strided((2, 4, 5), (20, 1, 4), dtype=torch.complex128, device='cuda')
def foo(a):
b = torch.constant_pad_nd(a, [0, 0, 0, -2, 0, 0])
return b
new_foo = torch.compile(foo)
new_foo(a)
```
We can print the IR graph:

In the pink box we can see that the stride of `constant_pad_nd` is **`(10, 5, 1)`**, which is **not aligned** with the stride **`(10, 1, 2)`** above, so we can infer there is a mismatch between the meta function used for the direct run and the one used inside `inductor`.
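A minimal end-to-end check of the mismatch (my own sketch; it assumes inductor materializes the output with the strides predicted by its meta function):
```
import torch

def foo(a):
    return torch.constant_pad_nd(a, [0, 0, 0, -2, 0, 0])

a = torch.empty_strided((2, 4, 5), (20, 1, 4), dtype=torch.complex128, device="cuda")
print(foo(a).stride())                 # (10, 1, 2) in eager
print(torch.compile(foo)(a).stride())  # may come back as (10, 5, 1), matching the IR above
```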
## Analysis
### call stack
I tried to debug the two different behaviors and found that both approaches go to:
https://github.com/pytorch/pytorch/blob/816328fa51382e9b50e60fb928a690d5c1bdadaf/torch/_prims_common/wrappers.py#L290-L291
but they may end up with a different `fn`:
1. Directly run it with `meta` device:
it will go to some function called `empty_like`:
https://github.com/pytorch/pytorch/blob/816328fa51382e9b50e60fb928a690d5c1bdadaf/torch/_refs/__init__.py#L4972-L5010
This method keeps the output's stride aligned with the `cpu` and `cuda` devices.
2. torch.compile:
It will go to some function called `constant_pad_nd`:
https://github.com/pytorch/pytorch/blob/816328fa51382e9b50e60fb928a690d5c1bdadaf/torch/_refs/__init__.py#L2901-L2981
This method does not seem to keep the stride aligned with the `cpu` and `cuda` devices.
### my guess
I suspect this is caused by how the meta function is initialized for each op: since there seems to be no dedicated meta function for `constant_pad_nd`, PyTorch skips some setup here:
https://github.com/pytorch/pytorch/blob/816328fa51382e9b50e60fb928a690d5c1bdadaf/torch/_meta_registrations.py#L6921-L6926
## Potential solution
I'm not sure whether this is expected behavior; if it is not, there are two potential solutions in my mind:
1. Make `inductor` call the `empty_like` path shown above.
2. Introduce a meta function `meta_constant_pad_nd` for this op and make both of the code paths mentioned above use it.
WDYT? Any help would be greatly appreciated! Thank you!
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0a0+gitf3ec745
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~22.04) 10.5.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7219.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (12 instances)
L1i cache: 512 KiB (12 instances)
L2 cache: 12 MiB (9 instances)
L3 cache: 25 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.1
[pip3] torch==2.6.0a0+gitf3ec745
[pip3] triton==3.1.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0a0+gitf3ec745 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @chauhang @penguinwu @SherlockNoMad @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng
| true
|
2,768,512,251
|
[inductor] [cpu] `sum-softmax` throws `AssertionError: buf1`
|
shaoyuyoung
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: when using `.sum` and `torch.nn.Softmax` with `dim=-1` for both, the CPU inductor throws an assertion error.
**device**: CPU only; the Triton backend outputs correctly.
I am not sure whether this is a problem with my CPU; I have run this on Colab and got the same result.
```python
import torch
torch.manual_seed(0)
torch.set_grad_enabled(False)
from torch._inductor import config
config.fallback_random = True
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.softmax = torch.nn.Softmax(dim=-1)
def forward(self, x):
x = x.sum(dim=-1)
x = self.softmax(x)
return x
model = Model()
x = torch.randn(8, 8, 2)
inputs = [x]
try:
output = model(*inputs)
print("succeed on eager")
except Exception as e:
print(e)
try:
c_model = torch.compile(model)
c_output = c_model(*inputs)
print("succeed on inductor")
except Exception as e:
print(e)
```
### Error logs
```
succeed on eager
C0104 14:26:37.238000 836437 site-packages/torch/_inductor/scheduler.py:1140] [0/0] Error in codegen for ComputedBuffer(name='buf3', layout=FixedLayout('cpu', torch.float32, size=[8, 8], stride=[8, 1]), data=Pointwise(device=device(type='cpu'), dtype=torch.float32, inner_fn=<function make_pointwise.<locals>.inner.<locals>.inner_fn at 0x7fd288683740>, ranges=[8, 8]))
AssertionError: buf1
```
### Versions
PyTorch version: 20241230
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu
| true
|
2,768,392,600
|
Fix annotate_getitem_nodes for new type annotations
|
cyyever
|
closed
|
[
"open source",
"release notes: fx",
"fx"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,768,385,251
|
[dynamo][user-defined] Shared getattr_static_helper between UserDefinedClass and UserDefinedObject
|
anijain2305
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144184
* #144173
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,768,382,668
|
[inductor] `index_copy` loses the shape check on inductor
|
shaoyuyoung
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: eager throws an error when the shapes of x and y differ, but inductor seems to do some special processing: it broadcasts y to the same shape as x.
**device**: both CUDA and CPU
```python
import torch
import torch.nn as nn
torch.manual_seed(0)
torch.set_grad_enabled(False)
from torch._inductor import config
config.fallback_random = True
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x, y, indices):
x = torch.index_copy(x, 0, indices, y)
return x
model = Model().cuda()
x = torch.randn(1, 2).cuda()
y = torch.randn(1, 1).cuda()
indices = torch.tensor([0]).cuda()
print(x)
print(y)
inputs = [x, y, indices]
try:
model(*inputs)
except Exception as e:
print(f"fail on eager: {e}")
try:
c_model = torch.compile(model)
output = c_model(*inputs)
print(output)
except Exception as e:
print(f"fail on inductor: {e}")
```
### Error logs
```
tensor([[ 1.5410, -0.2934]], device='cuda:0')
tensor([[-2.1788]], device='cuda:0')
fail on eager: index_copy_(): Source/destination tensor must have same slice shapes. Destination slice shape: 2 at dimension 0 and source slice shape: 1 at dimension 0.
tensor([[-2.1788, -2.1788]], device='cuda:0')
```
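For reference, a sanity check of my own (not part of the report): with a source whose slice shape actually matches, both backends agree, which suggests only the shape check is missing on the inductor path:
```python
# y must match x's slice shape along the non-indexed dimensions, i.e. (1, 2) here.
y_ok = torch.randn(1, 2).cuda()
print(model(x, y_ok, indices))                 # eager works
print(torch.compile(model)(x, y_ok, indices))  # inductor agrees
```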
### Versions
PyTorch version: 20241230
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,768,382,414
|
Fix ruff warnings in caffe2 and functorch
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
In preparation for upgrading ruff config to py3.9.
| true
|
2,768,355,936
|
Memory leak trying to average disc/gen losses during HiFiGAN training
|
AznamirWoW
|
closed
|
[] | 3
|
NONE
|
### 🐛 Describe the bug
Using `epoch_disc_sum += loss_disc` and `epoch_gen_sum += loss_gen_all` in the code below eats ~3 MB of RAM per step, so with a training epoch 3500 steps long it consumes ~10 GB by the end of the epoch; the allocated RAM then drops back down (perhaps triggered by torch.cuda.empty_cache()).
```python
epoch_disc_sum = 0.0
epoch_gen_sum = 0.0
for batch_idx, info in data_iterator:
    phone, pitch, spec, wave = info
    # Forward pass
    use_amp = config.train.fp16_run and device.type == "cuda"
    with autocast(enabled=use_amp):
        model_output = net_g(phone, pitch, spec)
        y_hat, x_mask, z_mask, (z, z_p, m_p, logs_p, m_q, logs_q) = (model_output)
        y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach())
        with autocast(enabled=False):
            loss_disc, _, _ = discriminator_loss(y_d_hat_r, y_d_hat_g)
    # Discriminator backward and update
    epoch_disc_sum += loss_disc  #.item()
    optim_d.zero_grad()
    scaler.scale(loss_disc).backward()
    scaler.unscale_(optim_d)
    grad_norm_d = torch.nn.utils.clip_grad_norm_(net_d.parameters(), max_norm=1000.0)
    scaler.step(optim_d)
    scaler.update()
    # Generator backward and update
    with autocast(enabled=use_amp):
        _, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat)
        with autocast(enabled=False):
            loss_mel = fn_mel_loss(wave, y_hat) * config.train.c_mel / 3.0
            loss_env = envelope_loss(wave, y_hat)
            loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * config.train.c_kl
            loss_fm = feature_loss(fmap_r, fmap_g)
            loss_gen, _ = generator_loss(y_d_hat_g)
            loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl + loss_env
    epoch_gen_sum += loss_gen_all  #.item()
    optim_g.zero_grad()
    scaler.scale(loss_gen_all).backward()
    scaler.unscale_(optim_g)
    grad_norm_g = torch.nn.utils.clip_grad_norm_(net_g.parameters(), max_norm=1000.0)
    scaler.step(optim_g)
    scaler.update()
    with torch.no_grad():
        torch.cuda.empty_cache()
    #avg_losses["disc_loss_queue"].append(epoch_disc_sum / len(train_loader))
    #avg_losses["gen_loss_queue"].append(epoch_gen_sum / len(train_loader))
```
### Versions
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro (10.0.19045 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.16 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:19:12) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Ti SUPER
Nvidia driver version: 566.36
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 5 7600X 6-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 4701
MaxClockSpeed: 4701
L2CacheSize: 6144
L2CacheSpeed: None
Revision: 24834
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.3.1+cu121
[pip3] torchaudio==2.3.1+cu121
[pip3] torchcrepe==0.0.23
[pip3] torchfcpe==0.0.4
[pip3] torchvision==0.18.1+cu121
[conda] Could not collect
| true
|
2,768,345,649
|
[Submodule] Upgrade to Cutlass 3.6
|
drisspg
|
closed
|
[
"module: cuda",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: sparse",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144180
cc @ptrblck @msaroufim @eqy
| true
|
2,768,345,086
|
Add fallbacks for aten._assert_tensor_metadata in inductor lowering
|
yushangdi
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Summary: We are adding `aten._assert_tensor_metadata` op in exported graphs in https://github.com/pytorch/pytorch/pull/142420, and we need an explicit fallback when `config.implicit_fallbacks = False`.
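For readers unfamiliar with inductor fallbacks, a hedged sketch of the kind of registration this summary describes (the actual diff may differ):
```python
# In torch/_inductor/lowering.py: route the op to the eager kernel instead of lowering it.
make_fallback(aten._assert_tensor_metadata.default)
```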
Test Plan:
```
buck2 run mode/dev-nosan fbcode//caffe2/test/inductor:test_aot_inductor -- -r assert_tensor_meta
```
Differential Revision: D67817861
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,329,243
|
c10::string_view -> std::string_view in Device.cpp
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements"
] | 4
|
CONTRIBUTOR
|
Test Plan: Sandcastle
Differential Revision: D67817163
| true
|
2,768,328,235
|
c10::string_view -> std::string_view in torchgen
|
r-barnes
|
closed
|
[
"fb-exported",
"Stale",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Test Plan: Sandcastle
Differential Revision: D67817109
| true
|
2,768,313,729
|
[inductor] Avoid specializing over symbolic value during constant folding
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144176
Fixes #143667. See more context in the issue.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,299,353
|
Compiled autograd fails due to `AttributeError: 'NoneType' object has no attribute 'bw_module'`
|
akihironitta
|
closed
|
[
"triaged",
"oncall: pt2",
"module: compiled autograd"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I tried to use Compiled Autograd with this patch https://github.com/pyg-team/pytorch-frame/commit/a3df884148daf3f1983685aff926166af37a371a on nightly, following the tutorial https://pytorch.org/tutorials/intermediate/compiled_autograd_tutorial.html; however, it fails due to `AttributeError: 'NoneType' object has no attribute 'bw_module'`.
### Error logs
```console
git clone -b akihironitta/compiled-autograd https://github.com/pyg-team/pytorch-frame.git
cd pytorch-frame
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu126
pip install -e .
python examples/trompt.py --compile
```
```
Epoch 1: 0%| | 0/57 [00:04<?, ?it/s]
Traceback (most recent call last):
File "/home/aki/work/github.com/pyg-team/pytorch-frame/examples/trompt.py", line 157, in <module>
train_loss = train(epoch)
File "/home/aki/work/github.com/pyg-team/pytorch-frame/examples/trompt.py", line 126, in train
loss.backward()
File "/home/aki/miniconda3/envs/pyf-pt-nightly/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/home/aki/miniconda3/envs/pyf-pt-nightly/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/home/aki/miniconda3/envs/pyf-pt-nightly/lib/python3.10/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/aki/miniconda3/envs/pyf-pt-nightly/lib/python3.10/site-packages/torch/_dynamo/compiled_autograd.py", line 819, in set_node_origin
"aot_gm": forward_cls._lazy_backward_info.bw_module,
AttributeError: 'NoneType' object has no attribute 'bw_module'
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241231+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1055-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 545.23.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7R13 Processor
Stepping: 1
CPU MHz: 2649.998
BogoMIPS: 5299.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 4 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-frame==0.2.3
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241231+cu126
[conda] numpy 2.2.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-frame 0.2.3 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241231+cu126 pypi_0 pypi
```
cc @chauhang @penguinwu @xmfan @yf225
| true
|
2,768,281,733
|
[dynamo][user-defined] Use __getattribute__ for getsetdescriptor lookups
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144184
* __->__ #144174
* #144173
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,768,279,561
|
[dynamo][user-defined] Remove __getattribute__ checks and add getsetdescriptor
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144184
* __->__ #144173
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,768,275,133
|
Refactor FxGraphDrawer to use HTML-like labels (#137726) (#137726)
|
exclamaforte
|
closed
|
[
"fb-exported",
"Stale",
"ciflow/trunk",
"release notes: releng",
"fx",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary:
Fixes https://github.com/pytorch/pytorch/issues/137499
Testing: Added a new unit test to make sure that the regression case succeeds.
I'm debating whether to make the borders visible. I'm partial to no borders, but it might make it harder for some people to read.

Vs.

Approved by: https://github.com/eellison, https://github.com/malfet
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/1e738420296a84406cd0a1626074ea6447a6603a
Reviewed By: jingsh, dulinriley
Differential Revision: D65460378
Pulled By: exclamaforte
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,768,267,438
|
[dtensor] deprecate _shard_tensor to use src_data_rank=None
|
wanchaol
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144171
as titled, we can achieve communication-free sharding for the inference case with
src_data_rank=None, so deprecate the private API
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,768,260,704
|
[MPS] Fix `nllnd_loss_backward` crash with different dtypes
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* __->__ #144170
* #144084
* #144083
* #144162
* #144167
Otherwise, invoking with torch.half inputs, but float weights will result in
```
(mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:233:0: error: 'mps.divide' op requires the same element type for all operands and results
(mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:233:0: note: see current operation: %16 = "mps.divide"(%15, %arg2) : (tensor<5x5xf16>, tensor<1xf32>) -> tensor<*xf32>
(mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:233:0: error: 'mps.divide' op requires the same element type for all operands and results
(mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:233:0: note: see current operation: %16 = "mps.divide"(%15, %arg2) : (tensor<5x5xf16>, tensor<1xf32>) -> tensor<*xf32>
2025-01-03 14:13:18.747151-0800 python[87772:4027380] /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm, line 975: error 'original module failed verification'
/AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:975: failed assertion `original module failed verification'
```
Test plan: `python -mpytest test/inductor/test_torchinductor.py -k test_nll_loss_backward_mps` should not crash
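A minimal repro sketch of the dtype mismatch, assuming a 5x5 half-precision input with float class weights (shapes and values are illustrative, not the actual test):
```python
import torch
import torch.nn.functional as F

x = torch.randn(5, 5, device="mps", dtype=torch.half, requires_grad=True)
target = torch.randint(0, 5, (5,), device="mps")
weight = torch.rand(5, device="mps", dtype=torch.float)  # dtype intentionally differs from x

loss = F.nll_loss(F.log_softmax(x, dim=1), target, weight=weight)
loss.backward()  # previously asserted inside the MPS nll loss backward kernel
```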
| true
|
2,768,244,186
|
[mps/inductor] Add support for log().
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 5
|
MEMBER
|
Tested via:
```
% pytest test/inductor/test_mps_basic.py
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,241,947
|
detect fake mode in proxy_tensor creation in make_fx
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary:
Fixes https://github.com/pytorch/pytorch/issues/143742
A FakeTensorMode may already exist when we are setting the "val" meta of a proxy tensor. We should detect existing FakeTensorMode before creating a new one.
Otherwise, we could cause an error when using `detect_fake_mode` later, because multiple FakeTensorModes would then exist.
Test Plan: The error in https://github.com/pytorch/pytorch/issues/143742
Differential Revision: D67813111
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,768,221,425
|
[MPSInductor] Extend `constant` to bool type
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144170
* #144084
* #144083
* #144162
* __->__ #144167
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,214,438
|
[dtensor] move all tests to distribute/tensor folder
|
wanchaol
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: releng",
"fx",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144166
as titled, mainly moving files
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,186,248
|
[dynamo][easy] Move dict tests to test_dicts.py
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144342
* __->__ #144165
* #143997
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,768,147,221
|
Make whl metadata public readable
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default"
] | 3
|
CONTRIBUTOR
|
After https://github.com/pytorch/pytorch/pull/143677/files#r1902138480 landed, the new nightly wheel metadata is not publicly readable, causing pip install to fail, for example https://github.com/pytorch/pytorch/actions/runs/12603415308/job/35128414909.
FBGEMM folks have also noticed this failure on their end (cc @q10)
| true
|
2,768,116,315
|
[dynamo] Trace torch.typename
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144165
* #143997
* #144160
* __->__ #144163
* #144158
* #144141
* #144130
* #144129
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,768,080,704
|
[MPSInductor] Add `remainder` op
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144170
* #144084
* #144083
* __->__ #144162
* #144167
For it to return a correct result for half-precision types, the inputs must be upcast to float
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,062,489
|
[BE] typing for decorators
|
aorenste
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"fx",
"module: inductor",
"ciflow/inductor",
"release notes: AO frontend"
] | 9
|
CONTRIBUTOR
|
Summary:
Untyped decorators strip annotations from the decorated items.
- _compile
- _inductor/fx_passes/post_grad
- _inductor/lowering
- _library/custom_ops
- _meta_registrations
- _ops
- _refs/nn/functional
- ao/quantization/quantizer/xnnpack_quantizer_utils
- distributed/_composable/contract
- fx/experimental/graph_gradual_typechecker
- fx/experimental/migrate_gradual_types/constraint_generator
- optim/optimizer
- signal/windows/windows
- testing/_internal/common_device_type
- torch/_inductor/decomposition
- utils/flop_counter
Test Plan: unit tests
Differential Revision: D62302684
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,768,050,084
|
[dynamo][lazy] LazyVT utils to get original value/source and is_hashable
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144165
* #143997
* __->__ #144160
* #144163
* #144158
* #144141
* #144130
* #144129
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,768,031,584
|
Allow generic python data structure input for torch.autograd.Function
|
nmerrillq
|
open
|
[
"module: autograd",
"triaged",
"needs research",
"has workaround"
] | 4
|
NONE
|
### 🚀 The feature, motivation and pitch
I have a custom C++ function that takes in dicts of inputs which can contain different tensors depending on the mode and returns the gradients in the backwards pass. Currently, torch.autograd.Function does not support dict input. It would be nice if it could support dict/list/tuple as input and traverse the input internally and allow the output gradients to be of the same type as the input. There are workarounds for list and tuple, such as exploiting the `*` operator, but not for dict. Here is an example of the desired usage:
```
import torch
from custom_cpp_impl import CustomClass
cls = CustomClass()
class MyCustomFunction(torch.autograd.Function):
@staticmethod
def forward(ctx, input_dict):
ctx.save_for_backward(input_dict)
output = cls.forward(input_dict)
return output
@staticmethod
def backward(ctx, grad_output):
input_dict = ctx.saved_tensors
grad_dict = cls.backward(input_dict, grad_output) # Keys would be same as input_dict
return grad_dict
# Run in "xy" mode
input_dict0 = {'x': torch.tensor(2.0, requires_grad=True),
'y': torch.tensor(3.0, requires_grad=True)}
output0 = MyCustomFunction.apply(input_dict0)
output0.backward()
print(input_dict0['x'].grad)
print(input_dict0['y'].grad)
# Run in "yz" mode
input_dict1 = {'y': torch.tensor(3.0, requires_grad=True),
'z': torch.tensor(4.0, requires_grad=True)}
output1 = MyCustomFunction.apply(input_dict1)
output1.backward()
print(input_dict1['y'].grad)
print(input_dict1['z'].grad)
```
The returned gradients dict from `backward()` could have the same keys as the input dict, or be prefixed with `'grad_'` for example.
Currently, when you try something like this you get `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`. I believe many users (such as the user in [this](https://discuss.pytorch.org/t/custom-autograd-function-list-dict-input/21862) discussion) could benefit from this to write more concise custom autograd functions, rather than writing a different function for each case of the dict input, which can be very lengthy and redundant.
### Alternatives
The only alternative I can think of right now is to make a different autograd function for each case of the dict input. For example, if the dict can contain 'x' and 'y' or 'y' and 'z', which get processed differently in the C++ code, you would have to do this:
```
import torch
from custom_cpp_impl import CustomClass
cls = CustomClass()
class MyCustomFunctionXY(torch.autograd.Function):
@staticmethod
def forward(ctx, x, y):
output = cls.forward({"x": x, "y": y})
ctx.save_for_backward(x, y)
return output
@staticmethod
def backward(ctx, grad_output):
x, y = ctx.saved_tensors
grads = cls.backward({"x": x, "y": y}, grad_output)
return grads["grad_x"], grads["grad_y"]
class MyCustomFunctionYZ(torch.autograd.Function):
@staticmethod
def forward(ctx, y, z):
output = cls.forward({"y": y, "z": z})
ctx.save_for_backward(y, z)
return output
@staticmethod
def backward(ctx, grad_output):
y, z = ctx.saved_tensors
grads = cls.backward({"y": y, "z": z}, grad_output)
return grads["grad_y"], grads["grad_z"]
input_dict0 = {'x': torch.tensor(2.0, requires_grad=True),
'y': torch.tensor(3.0, requires_grad=True)}
input_dict1 = {'y': torch.tensor(3.0, requires_grad=True),
'z': torch.tensor(4.0, requires_grad=True)}
output0 = MyCustomFunctionXY.apply(input_dict0['x'], input_dict0['y'])
output0.backward()
output1 = MyCustomFunctionYZ.apply(input_dict1['y'], input_dict1['z'])
output1.backward()
```
which is not very concise considering the C++ code is written to take in different combinations of dict input.
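A hedged sketch of one way to avoid the duplication: flatten the dict into positional tensors so a single Function handles any key combination. This reuses the hypothetical `cls` and `grad_*` key convention from the snippets above:
```python
import torch
from custom_cpp_impl import CustomClass

cls = CustomClass()

class DictFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, keys, *values):
        ctx.keys = keys                      # non-Tensor metadata rides along on ctx
        ctx.save_for_backward(*values)
        return cls.forward(dict(zip(keys, values)))

    @staticmethod
    def backward(ctx, grad_output):
        input_dict = dict(zip(ctx.keys, ctx.saved_tensors))
        grads = cls.backward(input_dict, grad_output)
        # one None for the non-Tensor `keys` argument, then one gradient per value
        return (None, *(grads["grad_" + k] for k in ctx.keys))

def apply_dict(input_dict):
    keys = tuple(input_dict.keys())
    return DictFunction.apply(keys, *(input_dict[k] for k in keys))

output0 = apply_dict({'x': torch.tensor(2.0, requires_grad=True),
                      'y': torch.tensor(3.0, requires_grad=True)})
output0.backward()
```
This still exposes the dict as positional tensors to autograd, so it is a workaround rather than true dict support.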
### Additional context
_No response_
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,768,005,003
|
[dynamo][easy] Move symnode helpers to utils
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"merging"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144165
* #143997
* #144160
* #144163
* __->__ #144158
* #144141
* #144130
* #144129
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,768,004,191
|
Revert "Add support for `contextmanager` in Dynamo (#136033)"
|
guilhermeleobas
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144157
This reverts commit 673cc88fd607ed888662e8732df0dc935841b20b.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,767,947,017
|
[MPSInductor] Add `constant`, `isinf` and `isnan` ops
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144084
* #144083
* #144050
* __->__ #144156
* #144105
* #144122
* #144051
* #144055
Per Table 6.5 of [Metal Language Specification](https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf) infinity is `HUGE_VALF`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,767,918,290
|
[ez] Use strip for arg sanitization in upload_metadata_file to improve readability
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Minor thing that improves readability. I didn't realize you could specify characters for strip when I wrote this.
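For reference, a minimal illustration of `str.strip` with an explicit character set (the strings and argument here are illustrative, not taken from the script):
```python
raw = '"refs/heads/main"\n'
print(raw.strip('"\n'))   # refs/heads/main -- strips any of the listed characters from both ends
```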
| true
|
2,767,918,036
|
fix memleak, detach instead of clone to not drag around graph
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic"
] | 5
|
CONTRIBUTOR
|
Thanks @clee2000 for bringing the memleak to my attention: https://github.com/pytorch/pytorch/actions/runs/12549765082/job/34996244798.
This memleak in the test was caused by the differentiable flavors. Because we had param.clone() and param persisted outside the for loop, the autograd graph would continue growing for each optimizer.step instead of being deleted after the optim input was used up.
To clarify, I had still expected (and still do expect) the test to fully clean everything up once the test is over, but I didn't get the chance to look into why that's not the case. This change would preliminarily unblock this particular test from failing the memleak CI.
Use detach instead of clone, which is...cheaper anyway :D since a detach, as I've learned from @soulitzer, is a view with requires_grad=False
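A small sketch of the difference being relied on here (illustrative tensors, not the test code):
```python
import torch

param = torch.randn(3, requires_grad=True) * 2   # some tensor with autograd history

cloned = param.clone()     # stays attached to the autograd graph
detached = param.detach()  # view with requires_grad=False, keeps no graph alive

print(cloned.requires_grad, cloned.grad_fn is not None)      # True True
print(detached.requires_grad, detached.grad_fn is not None)  # False False
```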
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144154
| true
|
2,767,835,554
|
Update copyright year to 2025
|
kuraga
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
| null | true
|
2,767,810,384
|
`torch.device(0)` makes CUDA init fail in subprocess since `2.5.0`
|
cbensimon
|
closed
|
[
"high priority",
"module: cuda",
"triaged",
"module: regression",
"module: accelerator"
] | 11
|
NONE
|
### 🐛 Describe the bug
```python
from multiprocessing import Process
import torch
torch.device(0) # Note that torch.device('cuda') or torch.device('cuda:0') do not trigger the issue
def cuda_init():
torch.Tensor([0]).cuda()
p = Process(target=cuda_init)
p.start()
p.join()
assert p.exitcode == 0
```
This code snippet succeeds on PyTorch `2.4.1` and fails on `2.5.0`:
```
RuntimeError: CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
Indeed, since `2.5.0`, `torch.device(0)` calls `at::getAccelerator`, which ends up calling `cudaGetDeviceCount`, thus initializing CUDA and breaking CUDA use in forked subprocesses.
It seems to be directly linked with:
- https://github.com/pytorch/pytorch/pull/131811
(especially the change in `torch/csrc/utils/python_arg_parser.h`)
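A hedged workaround sketch while the regression stands: keep the string device form (which does not poke the accelerator API, per the report above) or use the `spawn` start method so the child does not inherit an initialized CUDA context:
```python
from multiprocessing import get_context
import torch

torch.device('cuda:0')   # string form avoids the eager device-count query on 2.5

def cuda_init():
    torch.Tensor([0]).cuda()

if __name__ == '__main__':
    p = get_context('spawn').Process(target=cuda_init)  # spawn stays safe even after CUDA init
    p.start()
    p.join()
    assert p.exitcode == 0
```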
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.9 (main, Feb 3 2023, 11:29:04) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1028-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.0.221
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 2820.130
BogoMIPS: 5599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 2 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.7.101
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.2.10.91
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.0.1
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.4.91
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu11==2.14.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.7.91
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @ptrblck @eqy @albanD @guangyey @EikanWang
| true
|
2,767,606,919
|
fix a bug for fft c2c stride in cpu
|
ywq880611
|
closed
|
[
"triaged",
"open source",
"module: fft"
] | 5
|
CONTRIBUTOR
|
Fixes #144150
Please see details in the issue.
cc @mruberry
| true
|
2,767,606,307
|
Wrong stride for fft_c2c for cpu tensor
|
ywq880611
|
closed
|
[
"module: cpu",
"triaged",
"module: mkl",
"module: fft"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
## Summary
I found some differences in the output tensor's stride for the `fft_c2c` op on different devices; is this expected behavior? There are already some other issues about this, but it seems the problem has still not been fixed, and it may block this [PR](https://github.com/pytorch/pytorch/pull/140399), so I am filing a new issue here.
## Case
For the below code snippet:
```
import torch
a = torch.zeros((5, 6, 7), dtype=torch.complex64, device='cpu')
print(a.shape)
print(a.stride())
b = torch._fft_c2c(a, [0], 2, False)
print(b.shape)
print(b.stride())
```
For `cpu` tensor, its output is:
```
torch.Size([5, 6, 7])
(42, 7, 1)
cpu _fft_c2c_mkl 1
torch.Size([5, 6, 7])
(42, 7, 1)
```
For both `cuda` and `meta` tensor, its output is:
```
torch.Size([5, 6, 7])
(42, 7, 1)
torch.Size([5, 6, 7])
(1, 35, 5)
```
We can see the stride for `cpu` is **`(42, 7, 1)`**, but for the other devices it is **`(1, 35, 5)`**, hence I thought the implementation of `_fft_c2c` for `cpu` tensors may be wrong.
## Analysis
1. For the `meta` device, there is a long helper function to keep its stride the same as in `cuda` mode:
https://github.com/pytorch/pytorch/blob/a1ae8fadc709f7c18788f8828ee3166f96245bec/torch/_meta_registrations.py#L221-L271
2. For the `cpu` device, there are two kinds of implementations: `pocketfft` and `mkl`.
A) if we enable `pocketfft` at build time:
https://github.com/pytorch/pytorch/blob/a1ae8fadc709f7c18788f8828ee3166f96245bec/aten/src/ATen/native/mkl/SpectralOps.cpp#L309-L328
There is no logic to make the stride align with `cuda` mode.
B) if we just use `mkl`:
https://github.com/pytorch/pytorch/blob/a1ae8fadc709f7c18788f8828ee3166f96245bec/aten/src/ATen/native/mkl/SpectralOps.cpp#L562-L571
It will call into `_exec_fft`, which keeps the output's stride aligned with `cuda` mode.
## Potential solution
I think we could port some of the logic from `_exec_fft` in `mkl` to `pocketfft` to align the stride. I have drafted a PR, PTAL, thanks!
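A small check sketch that makes the mismatch easy to verify on any build (same shapes as the repro above; `meta` stands in for the `cuda` behavior):
```python
import torch

def c2c_strides_match(shape=(5, 6, 7), dim=(0,)):
    out_cpu = torch._fft_c2c(torch.zeros(shape, dtype=torch.complex64, device='cpu'), dim, 2, False)
    out_meta = torch._fft_c2c(torch.zeros(shape, dtype=torch.complex64, device='meta'), dim, 2, False)
    return out_cpu.stride() == out_meta.stride()

print(c2c_strides_match())  # False on pocketfft builds per the analysis above
```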
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gitf3ec745
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~22.04) 10.5.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7219.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (12 instances)
L1i cache: 512 KiB (12 instances)
L2 cache: 12 MiB (9 instances)
L3 cache: 25 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.1
[pip3] torch==2.6.0a0+gitf3ec745
[pip3] triton==3.1.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0a0+gitf3ec745 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mruberry
| true
|
2,767,510,309
|
S390x cancelled jobs cleanup
|
AlekseiNikiforovIBM
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/s390"
] | 4
|
COLLABORATOR
|
Sometimes a job is cancelled during nested docker container creation.
This leads to the nested docker container not being stopped and the worker hanging forever in the job.
Improve nested docker container cleanup for these cases.
| true
|
2,767,454,835
|
Unable to parse and visualize "trace view" with tensorboard
|
Neronjust2017
|
open
|
[
"module: tensorboard",
"oncall: profiler"
] | 0
|
NONE
|
### 🐛 Describe the bug
I'm using the following code to profile my PyTorch model. https://github.com/Lightning-AI/pytorch-lightning/issues/20525
```python
import torch
from pytorch_lightning.profilers import PyTorchProfiler
import lightning as L
schedule = torch.profiler.schedule(
wait=2,
warmup=2,
active=5,
repeat=10
)
profiler = PyTorchProfiler(
dirpath="{tbprofiler_path}",
filename="trace",
schedule=schedule,
export_to_chrome=True,
with_stack=True,
record_shapes=True,
record_module_names=True,
profile_memory=True
)
trainer_arg_dict["profiler"] = profiler
return L.Trainer(
**trainer_arg_dict,
)
```
The code terminated normally; however, I got this error when using TensorBoard to visualize the trace results.

and the overview result also seems incorrect.

The CPU peak memory usage is only 3.0, which is also quite strange. Any suggestions about this? Thanks.

### Versions
PyTorch version: 2.3.0a0+6ddf5cf85e.nv24.04
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.199.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] cudnn==1.1.2
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] onnxruntime==1.16.0
[pip3] optree==0.11.0
[pip3] pynvjitlink==0.1.13
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==3.0.0+a9bc1a364
[pip3] torch==2.3.0a0+6ddf5cf85e.nv24.4
[pip3] torch-scatter==2.1.2
[pip3] torch-tensorrt==2.3.0a0
[pip3] torchdata==0.7.1a0
[pip3] torchmetrics==1.4.2
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.18.0a0
[conda] Could not collect
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,767,208,118
|
Introduce test skip markers for Sandcastle
|
Flamefire
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"triaged",
"open source",
"Stale",
"release notes: package/deploy"
] | 2
|
COLLABORATOR
|
Simplify the markers a bit to make them more expressive
It also makes it easier to skip those tests "manually" by changing the single definition of the skip marker.
This is important to reduce potential false positives (of failed tests) in some environments, such as HPC clusters
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,767,202,095
|
Skip ao_sparsity TestComposability for missing FBGEMM
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"Merged",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Those tests (from test_ao_sparsity) require FBGEMM, which may not be available, so add the skip decorator.
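A sketch of the kind of guard involved, assuming the usual engine-availability check (the real decorator lives in the shared quantization test utilities):
```python
import unittest
import torch

skipIfNoFBGEMM = unittest.skipIf(
    "fbgemm" not in torch.backends.quantized.supported_engines,
    "Quantized operations require FBGEMM, which is not available on this build",
)

@skipIfNoFBGEMM
class TestComposability(unittest.TestCase):
    def test_something(self):
        ...
```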
Fixes #87364
| true
|
2,767,154,255
|
[reland][attempt2][AMD] Turn on TF32 for aten::mm
|
xw285cornell
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 5
|
CONTRIBUTOR
|
Summary:
https://github.com/pytorch/pytorch/pull/143549 was reverted due to some
internal/oss tooling issue. Relanding.
hipblaslt supports TF32, so adding the support.
Original PR https://github.com/pytorch/pytorch/pull/139869
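For context, a sketch of the existing Python-side switches that gate TF32 matmuls (shown for illustration; this change makes them meaningful on ROCm via hipblaslt):
```python
import torch

torch.backends.cuda.matmul.allow_tf32 = True  # opt in to TF32 for mm/addmm
torch.backends.cudnn.allow_tf32 = True        # and for convolutions
```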
Test Plan: CI
Differential Revision: D67785496
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,767,093,520
|
torch.onnx.export failed with Process finished with exit code 136 (interrupted by signal 8:SIGFPE)
|
My-captain
|
closed
|
[
"module: crash",
"module: onnx",
"triaged"
] | 21
|
NONE
|
### 🐛 Describe the bug
Hello everyone,
I am attempting to export the visual encoder (SigLIP-400M) of MiniCPMV-2.6 along with the modality projection module (Resampler) to ONNX. Here are the issues I'm encountering:
1. The export fails with an exit code 136 (interrupted by signal 8: SIGFPE), regardless of whether dynamo mode is used or not.
2. This exception cannot be caught by Python, and the reproducible code is located in the `v26_export_demo` within the following zip package.
Could anyone help me with this problem?
[v26quant.zip](https://github.com/user-attachments/files/18297225/v26quant.zip)
### Versions
pytorch 2.1.2+cu121
python 3.10
| true
|
2,767,081,296
|
Bug in using Intel A750 GPU
|
ca1ic0
|
closed
|
[
"triaged",
"module: xpu"
] | 14
|
NONE
|
### 🐛 Describe the bug
According to https://pytorch.org/docs/main/notes/get_start_xpu.html:
0. I installed the Ubuntu 24.04 OS
1. I installed the GPU driver
2. I installed the Intel AI essentials package
3. I installed miniconda and created a new env
```
conda create -n torch-xpu python
```
4. I installed the preview torch
```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/test/xpu
```
5. I used torch
```
(torch-xpu) calico@calico-B450M-HDV-R4-0:~$ python
Python 3.13.1 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:29:23) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.xpu.is_available()
/home/calico/miniconda3/envs/torch-xpu/lib/python3.13/site-packages/torch/xpu/__init__.py:60: UserWarning: Failed to initialize XPU devices. The driver may not be installed, installed incorrectly, or incompatible with the current setup. Please refer to the guideline (https://github.com/pytorch/pytorch?tab=readme-ov-file#intel-gpu-support) for proper installation and configuration. (Triggered internally at /pytorch/c10/xpu/XPUFunctions.cpp:54.)
return torch._C._xpu_getDeviceCount()
False
```
I also noticed that, when I source the oneapi-umf env, it reports a warning:
```
(torch-xpu) calico@calico-B450M-HDV-R4-0:~$ source /opt/intel/oneapi/umf/0.9/env/vars.sh
WARNING: hwloc library not found in /tcm/latest/lib
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.13.1 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:29:23) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 5 5500
CPU family: 25
Model: 80
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 0
CPU(s) scaling MHz: 68%
CPU max MHz: 4267.0000
CPU min MHz: 400.0000
BogoMIPS: 7200.13
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 3 MiB (6 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] pytorch-triton-xpu==3.2.0
[pip3] torch==2.6.0+xpu
[pip3] torchaudio==2.6.0+xpu
[pip3] torchvision==0.21.0+xpu
[pip3] triton==3.2.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] pytorch-triton-xpu 3.2.0 pypi_0 pypi
[conda] torch 2.6.0+xpu pypi_0 pypi
[conda] torchaudio 2.6.0+xpu pypi_0 pypi
[conda] torchvision 0.21.0+xpu pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,767,024,119
|
[torch.compile] Errors on autograd.Function forward returns non-Tensor
|
yanboliang
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"dynamo-autograd-function"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```
import torch
from torch.autograd import Function
class MyFunction(Function):
@staticmethod
def forward(ctx, x):
return x, [1, 2, 3] # Tensor and list of integers
@staticmethod
def backward(ctx, grad_output1, grad_output2):
return grad_output1
x = torch.tensor(2.0, requires_grad=True)
@torch.compile(backend="aot_eager", fullgraph=True)
def fn(x):
return MyFunction.apply(x)
y = fn(x)
print(y)
y[0].backward()
print(x.grad)
```
Error stack:
```
File "/home/ybliang/local/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 2420, in call_function
(fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph(
File "/home/ybliang/local/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 693, in speculate_subgraph
raise ex
File "/home/ybliang/local/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 589, in speculate_subgraph
unimplemented(
File "/home/ybliang/local/pytorch/torch/_dynamo/exc.py", line 322, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: HigherOrderOperator body's output must consist of tensors only
from user code:
File "/data/users/ybliang/debug/debug5.py", line 18, in fn
return MyFunction.apply(x)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
This issue was discovered while I'm working on #143811. Although the typical approach to passing constants to the backward function is by saving them in ```ctx```, we still need to ensure it works seamlessly in compile mode, as it already works correctly in eager mode.
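For reference, a sketch of the ctx-based pattern mentioned above for carrying constants (the usual eager-mode approach, with only tensors crossing the forward boundary):
```python
import torch

class MyFunctionCtx(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.extra = [1, 2, 3]   # non-Tensor constants ride along on ctx
        return x

    @staticmethod
    def backward(ctx, grad_output):
        _ = ctx.extra           # available here without being a forward output
        return grad_output
```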
### Versions
N/A
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225
| true
|
2,767,008,867
|
[dynamo][easy] Miscellaneous fixes
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144062
* #144061
* #143997
* __->__ #144141
* #144130
* #144129
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,767,008,557
|
[Intel GPU][Inductor] Convert Conv1D to 2D in inductor
|
jianyizh
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/xpu"
] | 4
|
CONTRIBUTOR
|
Layout optimization in inductor does not apply to Conv1D. We convert Conv1D to channel last Conv2D for better performance on Intel GPU. For example, demucs fp16 inference in torchbench can improve from 149ms to 91ms on Max 1100.
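A minimal sketch of the equivalence being exploited (illustrative shapes, not the actual inductor pass):
```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 8, 32)    # (N, C, L)
w = torch.randn(16, 8, 3)    # (out_channels, C, kernel)

y1d = F.conv1d(x, w, padding=1)
y2d = F.conv2d(x.unsqueeze(2), w.unsqueeze(2), padding=(0, 1)).squeeze(2)

print(torch.allclose(y1d, y2d, atol=1e-5))  # True: the 2D form can then use channels-last layouts
```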
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,947,941
|
Update TorchDynamo-based ONNX Exporter memory usage example code.
|
fatcat-z
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: docs"
] | 4
|
COLLABORATOR
|
Address related comments earlier.
| true
|
2,766,932,075
|
Increase C10_COMPILE_TIME_MAX_GPUS to 128
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 20
|
COLLABORATOR
|
To facilitate further possible changes of DeviceIndex to int16_t.
| true
|
2,766,920,691
|
Use more bits to represent `DeviceIndex`
|
kstreee-furiosa
|
closed
|
[] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
In some cases, the current 8-bit representation of DeviceIndex is not enough to cover all devices. (Actually, it is effectively 7 bits: Device.h requires a non-negative value for DeviceIndex, so the sign bit can't be used.)
In our case ([FuriosaAI](https://furiosa.ai/)), RNGD has 8 PEs within a single chip, and they can be fused with each other; specifying every combination of PE units requires 8 + 4 (2-PE fusion) + 2 (4-PE fusion) identifiers. That takes at least 4 bits to represent all independent PE configurations within a chip, which leaves only 3 bits to specify the NPU, covering only 7 NPUs, which is not enough.
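A quick sketch of the arithmetic (numbers taken from the paragraph above):
```python
usable_bits = 7                       # int8 DeviceIndex with the sign bit unusable
pe_ids = 8 + 4 + 2                    # single PEs + 2-PE fusions + 4-PE fusions = 14
pe_bits = (pe_ids - 1).bit_length()   # 4 bits to address any PE configuration
chip_bits = usable_bits - pe_bits     # 3 bits left over for the chip index
print(pe_bits, chip_bits, 2 ** chip_bits)  # 4 3 8 -> only a handful of chips addressable
```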
### Alternatives
Use at least 16bits to represent DeviceIndex
### Additional context
I'm willing to contribute to this issue, but I don't know the proper contribution process (approval for the change first, or code change first?). Any guidance on contributing is welcome.
| true
|
2,766,908,872
|
remove allow-untyped-defs from nn/utils/_deprecation_utils.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144136
| true
|
2,766,908,831
|
remove allow-untyped-defs from export/_remove_auto_functionalized_pass.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144136
* __->__ #144135
| true
|
2,766,908,798
|
remove allow-untyped-defs onnx/_internal/exporter/_fx_passes.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144136
* #144135
* __->__ #144134
| true
|
2,766,908,767
|
remove allow-untyped-defs from torch/onnx/operators.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144136
* #144135
* #144134
* __->__ #144133
| true
|
2,766,908,735
|
remove allow-untyped-defs from torch/jit/_passes/_property_propagation.py
|
bobrenjc93
|
closed
|
[
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: jit",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144136
* #144135
* #144134
* #144133
* __->__ #144132
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,766,908,694
|
remove allow-untyped-defs from torch/distributed/fsdp/_dynamo_utils.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"module: typing",
"Merged",
"release notes: distributed (fsdp)",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144136
* #144135
* #144134
* #144133
* #144132
* __->__ #144131
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @malfet @xuzhao9 @gramster
| true
|
2,766,905,749
|
[dynamo][easy] Minor fixes in guards.cpp
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144062
* #144061
* #143997
* #144141
* __->__ #144130
* #144129
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,766,905,690
|
[dynamo] remove inline inbuilt tests as flag is enabled by default
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144062
* #144061
* #143997
* #144141
* #144130
* __->__ #144129
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,766,876,459
|
Enable IPO on torch targets
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Stale",
"ciflow/binaries",
"topic: not user facing"
] | 13
|
COLLABORATOR
|
Try IPO builds
| true
|
2,766,857,474
|
Fix C++20 Wambiguous-reversed-operator warnings
|
cyyever
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,766,847,613
|
Use sccache 0.9.0 on ROCm build job
|
huydhn
|
closed
|
[
"module: rocm",
"Merged",
"topic: not user facing",
"test-config/default",
"ciflow/rocm"
] | 7
|
CONTRIBUTOR
|
TSIA, sccache 0.9.0 seems to work fine with the ROCm build job
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|