| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,891,606,244
|
DISABLED test_mark_unbacked_strict (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamic shapes"
] | 4
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mark_unbacked_strict&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38092982283).
Over the past 3 hours, it has been determined flaky in 27 workflow(s) with 54 failures and 27 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mark_unbacked_strict`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_misc.py`
cc @clee2000 @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,891,606,243
|
DISABLED test_export_defaults_ok_dynamic_shapes (__main__.DynamicShapesExportTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: asan, linux, rocm, slow, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_export_defaults_ok_dynamic_shapes&suite=DynamicShapesExportTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38095191629).
Over the past 3 hours, it has been determined flaky in 22 workflow(s) with 44 failures and 22 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_export_defaults_ok_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,891,605,755
|
DISABLED test_sys_modules_dynamic_shapes (__main__.DynamicShapesMiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 4
|
NONE
|
Platforms: mac, macos, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sys_modules_dynamic_shapes&suite=DynamicShapesMiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38097103660).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sys_modules_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @malfet @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,891,553,598
|
[AOTI][torchbench] microbench_unbacked_tolist_sum fails
|
desertfire
|
open
|
[
"triaged",
"oncall: pt2",
"export-triaged",
"oncall: export",
"module: aotinductor"
] | 0
|
CONTRIBUTOR
|
Repro:
```
python benchmarks/dynamo/torchbench.py --accuracy --inference --bfloat16 --export-aot-inductor --disable-cudagraphs --device cuda --only microbench_unbacked_tolist_sum
```
Error:
```
RuntimeError: Failed to run autotuning code block: An exception occurred in a subprocess:
...
RecursionError: maximum recursion depth exceeded
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @chenyang78 @yushangdi
| true
|
2,891,440,632
|
[pytree] simplify public API exposition with `__module__`
|
XuehaiPan
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: pytree",
"module: dynamo",
"ciflow/inductor",
"ci-test-showlocals"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148328
* #148180
* #137400
* #152624
Before this PR, the following statements are already available:
```python
import torch
import torch.utils.pytree.python
from torch.utils.pytree import python
from torch.utils.pytree import cxx
from torch.utils.pytree.python import tree_map
torch.utils.pytree.python
torch.utils.pytree.python.tree_map
torch.utils.pytree.cxx
torch.utils.pytree.cxx.tree_map
```
This PR makes the following extra statements available:
```python
import torch.utils.pytree.cxx
from torch.utils.pytree.cxx import tree_map
```
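As a rough illustration of the underlying Python mechanism the title refers to (my sketch, not the actual PR code): a re-exported function can advertise its public location by setting `__module__`, so introspection and documentation tools report the path users are meant to import it from.
```python
# Hypothetical sketch, not the PyTorch implementation: point __module__ of a
# re-exported function at its intended public location.
def tree_map(fn, tree):
    """Toy stand-in for the real pytree tree_map."""
    return [fn(leaf) for leaf in tree]

tree_map.__module__ = "torch.utils.pytree.cxx"  # advertised public location
print(tree_map.__module__)  # -> torch.utils.pytree.cxx
```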
cc @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,891,417,016
|
[ROCm] Unskip flex attention UTs after triton 3.3 bump
|
AmdSampsa
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 11
|
COLLABORATOR
|
Enable `test_flex_attention.py::TestLearnableBiases` unit tests.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @EikanWang @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan @yf225
| true
|
2,891,378,626
|
fx graph fails to recognize tensor.T as a 'call_method' node
|
XinyiYuan
|
closed
|
[
"triaged",
"module: fx",
"oncall: pt2"
] | 3
|
NONE
|
### 🐛 Describe the bug
I was trying to understand the mechanisms of fx graphs when I encountered this problem:
```python
import torch
class MyLinear(torch.nn.Module):
def __init__(self, in_features, out_features):
super().__init__()
self.weight = torch.nn.Parameter(torch.randn(out_features, in_features))
self.bias = torch.nn.Parameter(torch.randn(out_features))
def forward(self, x):
return torch.matmul(x, self.weight.T) + self.bias
class MyModule(torch.nn.Module):
def __init__(self):
super().__init__()
self.param = torch.nn.Parameter(torch.rand(3, 4))
self.linear = MyLinear(4, 5)
def forward(self, x):
w = self.linear.weight
x = x + w
x = self.linear(x)
x = x.relu()
x = torch.sum(x, dim=-1)
x = torch.topk(x, 3)
return x
model = MyModule()
trace = torch.fx.symbolic_trace(model)
print(trace.graph)
```
`self.weight.T` in `class MyLinear` is supposed to be recognized as:
```
%weight : get_attr[target=weight]
%T: call_method[target=T](args=(%weight, ), kwargs={})
```
But the result turns out to be:

where `self.weight.T` is mistaken for an attribute.
BTW, according to the output graph, fx seems to inline the `MyLinear.forward` function instead of transforming it into a 'call_module' node. Does this mean that all user-defined modules will be inlined and broken down, while only modules provided by torch (e.g., torch.nn.Linear) can be recognized as 'call_module' nodes?
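(Editorial addition, not part of the original report: continuing the snippet above, printing `op`/`target` for each traced node is a quick way to see how `symbolic_trace` classified `self.weight.T`.)
```python
# Reuses `trace` from the snippet above; lists how every node was classified.
for node in trace.graph.nodes:
    print(node.op, node.target)
```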
### Versions
torch==2.2.2
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @chauhang @penguinwu
| true
|
2,891,127,941
|
Create unique test report files for distributed tests
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
The distributed tests are executed once for each backend and for each init method.
`$TEST_REPORT_SOURCE_OVERRIDE` is used such that test results from different backends are stored in different files.
The same needs to be done for the init method.
Move the setting of the variable into `test_distributed` and incorporate the init method into the name.
Useful for e.g. https://github.com/pytorch/pytorch/issues/126523
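A minimal sketch of the intended shape of the change (my assumption, not the actual patch): derive the report name from both the backend and the init method so parallel runs write to distinct files.
```python
import os

# Hypothetical helper: one report file per (backend, init_method) combination.
def report_name(backend: str, init_method: str) -> str:
    return f"dist-{backend}-{init_method}"

os.environ["TEST_REPORT_SOURCE_OVERRIDE"] = report_name("nccl", "file")
print(os.environ["TEST_REPORT_SOURCE_OVERRIDE"])  # dist-nccl-file
```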
| true
|
2,891,100,890
|
`torch.linalg` routines break for inputs of more than 2**32 elements
|
cybersupersoap
|
open
|
[
"triaged",
"module: linear algebra",
"topic: fuzzer"
] | 4
|
NONE
|
### 🐛 Describe the bug
I have packed six `INTERNAL ASSERT FAILED` errors, two `Segmentation fault`s, and two `Floating point exception`s into this issue. I am reporting these because the error messages say 'please report a bug to PyTorch', or because they caused crashes.
An `INTERNAL ASSERT FAILED` at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp" will be raised when using `torch.Tensor.pinverse`, `torch.linalg.cond`, `torch.linalg.pinv`, `torch.svd`, and `torch.cholesky_solve`.
Here are their respective standalone reproduction codes and error messages:
```python
import torch
_input_tensor = torch.rand(2**31, 3)
_output_tensor = torch.Tensor.pinverse(_input_tensor)
```
```
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. linalg.svd: Argument 2 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
```python
import torch
A = torch.rand(2**31, 3)
cond_A = torch.linalg.cond(A)
```
```
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. linalg.svd: Argument 2 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
```python
import torch
A = torch.randn((2**31, 3))
A_inv = torch.linalg.pinv(A)
```
```
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. linalg.svd: Argument 2 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
```python
import torch
arg_1_tensor = torch.rand([3, 2**31], dtype=torch.float32)
arg_1 = arg_1_tensor.clone()
arg_2_tensor = torch.rand([3, 3], dtype=torch.float32)
arg_2 = arg_2_tensor.clone()
res = torch.cholesky_solve(arg_1, arg_2)
```
```
Intel oneMKL ERROR: Parameter 3 was incorrect on entry to SPOTRS.
false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. cholesky_solve_cpu: Argument 3 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
```python
import torch
arg_1_tensor = torch.rand([2, 2**31], dtype=torch.float32) # Setting the size of the input data to a large value
arg_1 = arg_1_tensor.clone()
arg_2 = False
res = torch.svd(arg_1, compute_uv=arg_2)
```
```
Intel oneMKL ERROR: Parameter 3 was incorrect on entry to SGESDD.
false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. linalg.svd: Argument 3 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
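(Editorial note, not stated in the report: a likely common cause is that the LP64 LAPACK/MKL interface takes 32-bit integer dimensions, so a dimension of 2**31 no longer fits and gets rejected as an "illegal value". A quick sanity check on that size argument:)
```python
# 2**31 overflows a signed 32-bit integer, the size type used by the LP64
# LAPACK interface these routines call into.
m = 2**31
INT32_MAX = 2**31 - 1
print(m > INT32_MAX)  # True
```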
---
An `INTERNAL ASSERT FAILED` at "/pytorch/aten/src/ATen/NamedTensorUtils.cpp" will be raised when using `torch.tensor`:
```python
import torch
tensor_names = []
x = torch.tensor([[1, 2, 3, 4], [4, 3, 2, 1]], dtype=torch.float32, names=tensor_names)
```
Error messages:
```
RuntimeError: !names.empty() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/NamedTensorUtils.cpp":163, please report a bug to PyTorch. propagate_names: passed in empty names to propagate to result with shape [2, 4]. Empty names means that name inference didnot occur; use `propagate_names_if_nonempty` instead of `propagate_names`.
```
---
A `Floating point exception (core dumped)` will be raised when using `torch.nn.functional.conv1d` and `torch.nn.LazyConvTranspose1d`.
Here are their respective standalone reproduction codes and error messages:
```python
try:
import torch
input_data = torch.randn(3, 5, 7)
conv1d_transpose = torch.nn.LazyConvTranspose1d(3, 2, stride=2**31, padding=1)
output_data = conv1d_transpose(input_data)
except Exception as e:
pass
```
```
Floating point exception (core dumped)
```
```python
import torch
arg_1_tensor = torch.rand([20, 16, 50], dtype=torch.float32)
arg_1 = arg_1_tensor.clone()
arg_2_tensor = torch.rand([33, 16, 3], dtype=torch.float32)
arg_2 = arg_2_tensor.clone()
arg_3_tensor = torch.rand([33], dtype=torch.float32)
arg_3 = arg_3_tensor.clone()
arg_4_0 = 2**32
arg_4 = [arg_4_0]
arg_5_0 = 0
arg_5 = [arg_5_0]
arg_6_0 = 1
arg_6 = [arg_6_0]
arg_7 = 1
res = torch.nn.functional.conv1d(arg_1, arg_2, arg_3, arg_4, arg_5, arg_6, arg_7)
```
```
Floating point exception (core dumped)
```
---
A `Segmentation fault (core dumped)` will be raised when using `torch.sparse.sum` and `torch.nansum`.
Here are their respective standalone reproduction codes and error messages:
```python
import torch
input = torch.sparse_coo_tensor(torch.tensor([[0, 1, -1], [2, 0, 2]]), torch.tensor([1, 2, 3]), torch.Size([3, 3]))
torch.sparse.sum(input, dim=-1)
```
```
Segmentation fault (core dumped)
```
```python
import torch
import numpy as np
arg_1_tensor = torch.rand([2, 2], dtype=torch.float32)
arg_1 = arg_1_tensor.clone()
arg_2 = np.array(0)
res = torch.nansum(arg_1, dim=arg_2)
```
```
Segmentation fault (core dumped)
```
All of the above bugs are reproducible with the nightly-build version 2.7.0.dev20250208+cpu.
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,890,919,312
|
[Windows][Inductor][XPU] Unload triton pyd files to be able to remove them on Windows.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148323
* #147727
* #148538
* #148534
In `fresh_inductor_cache`, removing the pyd files raises a permission error on Windows because they are still in use by the process.
So we clear the references to the loaded pyd library objects and unload them from the process.
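A rough illustration of the reference-dropping part (my sketch under assumptions, not the actual `fresh_inductor_cache` change; fully unloading a native DLL may additionally require freeing its library handle):
```python
import gc
import importlib
import sys

# Hypothetical helper: forget an imported extension module so the interpreter
# no longer holds references to it before its .pyd file is deleted.
def release_extension(module_name: str) -> None:
    sys.modules.pop(module_name, None)   # drop the cached module object
    importlib.invalidate_caches()        # clear importlib finder caches
    gc.collect()                         # allow the module object to be collected
```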
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,890,886,854
|
Checking for cuda version to see if bf16 is natively supported or emulated
|
KennyStryker
|
open
|
[
"module: cuda",
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 5
|
NONE
|
BF16 is only natively supported on Ampere architecture or higher with CUDA.
Previously, using `torch.cuda.is_bf16_supported()` would return `True` even on hardware such as the Nvidia RTX 2000 series, Nvidia Tesla T4, and other devices below capability 8.0 (Ampere). This was misleading because, starting from CUDA 10:
- BF16 operations are software-emulated by shifting inputs to the left and doing the computations in fp32, so using bf16 on unsupported hardware doesn't throw an exception.
- Software emulation is slow.
- Optimized kernels (like cublas) are missing.
With the current changes, the function will still return `True` on unsupported hardware, but it will now also display a warning.
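For callers who want to distinguish native support from emulation themselves, a small sketch (my addition, independent of this PR's warning) that checks the compute capability directly:
```python
import torch

# Ampere (sm_80) and newer GPUs support bf16 natively; older ones do not.
def bf16_is_native(device_index: int = 0) -> bool:
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability(device_index)
    return major >= 8

if torch.cuda.is_available():
    print("native bf16:", bf16_is_native())
```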
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
2,890,799,310
|
Huge numerical precision error when `torch.tensor(3811, dtype=torch.float16)`
|
wangzhen0518
|
closed
|
[
"module: bfloat16",
"module: half"
] | 1
|
NONE
|
### 🐛 Describe the bug
There is a huge numerical precision error when initializing tensors with certain values in torch.float16 and torch.bfloat16.
```python
import torch
torch.tensor(3811, dtype=torch.float16) # tensor(3812., dtype=torch.float16)
torch.tensor(3811, dtype=torch.bfloat16) # tensor(3808., dtype=torch.bfloat16)
```
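(Editorial note: this is the expected rounding to the nearest representable value. float16 has 10 explicit mantissa bits and bfloat16 has 7, so in the range [2048, 4096) adjacent representable values are 2 apart and 16 apart respectively, which yields 3812 and 3808. A short check:)
```python
import torch

# Spacing between adjacent representable values around 3811; 2048 = 2**11 is
# the largest power of two below 3811.
for dtype in (torch.float16, torch.bfloat16):
    x = torch.tensor(3811, dtype=dtype)
    spacing = torch.finfo(dtype).eps * 2048
    print(dtype, x.item(), "spacing:", spacing)
```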
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by conda-forge | (main, Feb 14 2025, 08:00:06) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 550.76
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13700F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
CPU max MHz: 5200.0000
CPU min MHz: 800.0000
BogoMIPS: 4224.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 24 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
| true
|
2,890,786,262
|
[ATen][CUDA] Optimize 128 bit vectorization
|
Aidyn-A
|
closed
|
[
"module: cuda",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/periodic",
"ciflow/binaries_wheel",
"module: core aten"
] | 11
|
COLLABORATOR
|
Fixes #147376.
As per request: https://github.com/pytorch/pytorch/pull/145746#pullrequestreview-2642118301
This PR keeps sm80 and older architectures from using vec8 kernels due to long compilation times and large binary size.
cc @ptrblck @msaroufim @eqy @jerryzh168 @manuelcandales @SherlockNoMad @angelayi
| true
|
2,890,768,223
|
[CD] Upgrade xpu runtime pypi packages version and enable windows kineto again
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 8
|
COLLABORATOR
|
Fixes https://github.com/pytorch/pytorch/issues/145155
| true
|
2,890,674,871
|
[TEST][SPARSE] Simplify branching in test_cusparselt_backend
|
Aidyn-A
|
closed
|
[
"module: tests",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Due to the introduction of additional CUDA versions, the branching has become more complicated. This PR is proposed to simplify the branching in `test_cusparselt_backend` in order to avoid checking each and every CUDA version.
cc @mruberry @ZainRizvi
| true
|
2,890,648,304
|
Compiling SageAttention fails with error C2872: "std" on latest torch nightly
|
NOFOX
|
open
|
[
"module: windows",
"triaged",
"module: regression",
"oncall: pt2"
] | 10
|
NONE
|
ENV: Win11, VS2022, Torch:
print(torch.__version__)
2.7.0.dev20250302+cu128
print(torchvision.__version__)
0.22.0.dev20250302+cu128
print(torch.cuda.is_available())
True
print(torch.cuda.get_device_name(0))
NVIDIA GeForce RTX 5080
print(torch.cuda.get_device_capability(0))
(12, 0)
Compiling SageAttention fails with error C2872: "std"
C:/tools/AI/ComfyUI-aki-v1.4/python/lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1076): error C2872: "std": ambiguous symbol
C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.43.34808/include\valarray(20): note: could be "std"
C:/tools/AI/ComfyUI-aki-v1.4/python/lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1076): note: or "std"
C:/tools/AI/ComfyUI-aki-v1.4/python/lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1076): note: the template instantiation context (the earliest instantiation context) is
C:/tools/AI/ComfyUI-aki-v1.4/python/lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1121): note: see reference to class template instantiation "torch::dynamo::autograd::IValuePacker<__int64>" being compiled
C:/tools/AI/ComfyUI-aki-v1.4/python/lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1059): note: while compiling class template member function "c10::TypePtr torch::dynamo::autograd::IValuePacker<__int64>::packed_type(void)"
C:/tools/AI/ComfyUI-aki-v1.4/python/lib/site-packages/torch/include\torch/csrc/dynamo/compiled_autograd.h(1121): note: see the first reference to "torch::dynamo::autograd::IValuePacker<__int64>::packed_type" in "torch::dynamo::autograd::IValuePacker<unsigned __int64>::packed_type"
ninja: build stopped: subcommand failed.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu
| true
|
2,890,533,255
|
do not run `test_ck_blas_library` on cpu
|
oraluben
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Fix on non-rocm:
```
root@e01-tw-ue5g2g3sap6:~/pytorch/test# python test_linalg.py TestLinalgCPU.test_ck_blas_library_cpu
E
======================================================================
ERROR: test_ck_blas_library_cpu (__main__.TestLinalgCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/root/pytorch/torch/testing/_internal/common_utils.py", line 3108, in wrapper
method(*args, **kwargs)
File "/root/pytorch/torch/testing/_internal/common_device_type.py", line 480, in instantiated_test
raise rte
File "/root/pytorch/torch/testing/_internal/common_device_type.py", line 460, in instantiated_test
result = test(self, **param_kwargs)
File "/root/pytorch/torch/testing/_internal/common_device_type.py", line 1242, in dep_fn
return fn(slf, *args, **kwargs)
File "/root/pytorch/torch/testing/_internal/common_utils.py", line 1981, in _fn
fn(*args, **kwargs)
File "/root/pytorch/test/test_linalg.py", line 8621, in test_ck_blas_library
torch.backends.cuda.preferred_blas_library('ck')
File "/root/pytorch/torch/backends/cuda/__init__.py", line 258, in preferred_blas_library
torch._C._set_blas_preferred_backend(_BlasBackends[backend])
RuntimeError: Cannot set preferred backend to Ck if PyTorch has not been compiled for ROCm.
To execute this test, run the following from the base repo dir:
python test/test_linalg.py TestLinalgCPU.test_ck_blas_library_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1 test in 0.346s
FAILED (errors=1)
```
| true
|
2,890,423,982
|
[Doc] [Win] libuv installation doc is not correct.
|
Stonepia
|
open
|
[
"module: windows",
"module: docs",
"triaged",
"topic: docs"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The Windows build recipe requires the following steps when building PyTorch:
```Bash
conda install -c conda-forge libuv=1.39
```
This would throw the following error:
```Bash
>conda install -c conda-forge libuv=1.39 -y
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- libuv=1.39
Current channels:
- https://conda.anaconda.org/conda-forge
```
Actually, there is no need to install libuv individually. It is offered as a dependency of cmake already:
```Bash
> conda install cmake
Channels:
- conda-forge
Platform: win-64
## Package Plan ##
environment location: C:\Users\sdp\miniforge3\envs\pt
added / updated specs:
- cmake
The following NEW packages will be INSTALLED:
cmake conda-forge/win-64::cmake-3.31.6-hff78f93_0
...
libuv conda-forge/win-64::libuv-1.50.0-h2466b09_0
zstd conda-forge/win-64::zstd-1.5.7-hbeecb71_1
```
### Versions
-
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @svekars @sekyondaMeta @AlannaBurke
| true
|
2,890,387,684
|
The recorded step number in profiler is wrong
|
Qizhi697
|
open
|
[
"oncall: profiler"
] | 0
|
NONE
|
### 🐛 Describe the bug
When wait=1, the profiler recorded fn [dispatch/combine] for 30/10 = 3 steps.
```python
schedule = torch.profiler.schedule(wait=1, warmup=1, active=2, repeat=0)
with torch.profiler.profile(activities=[torch.profiler.ProfilerActivity.CPU,torch.profiler.ProfilerActivity.CUDA], schedule=schedule) as prof:
for i in range(4):
dist.all_reduce(torch.ones(1, dtype=torch.float, device='cuda'))
for _ in range(10):
fn()
prof.step()
```
```table
---------------------------------------------------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls
---------------------------------------------------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
ProfilerStep* 0.00% 0.000us 0.00% 0.000us 0.000us 755.137ms 63.45% 755.137ms 188.784ms 4
void deep_ep::internode_ll::dispatch<3, 10, 7168>(void*, float*, int*, long*, void*, int*, void*,... 0.00% 0.000us 0.00% 0.000us 0.000us 723.111ms 60.76% 723.111ms 24.104ms 30
void deep_ep::internode_ll::combine<3, 10, 7168, 9>(void*, void*, int*, void*, void const*, long ... 0.00% 0.000us 0.00% 0.000us 0.000us 408.413ms 34.31% 408.413ms 13.614ms 30
ProfilerStep* 0.09% 1.129ms 0.20% 2.360ms 1.180ms 0.000us 0.00% 58.665ms 29.333ms 2
c10d::allreduce_ 0.01% 65.902us 0.02% 239.234us 119.617us 0.000us 0.00% 58.662ms 29.331ms 2
record_param_comms 0.01% 137.480us 0.02% 180.768us 45.192us 58.662ms 4.93% 58.662ms 14.666ms 4
ncclDevKernel_AllReduce_Sum_f32_TREE_LL(ncclDevComm*, unsigned long, ncclWork*) 0.00% 0.000us 0.00% 0.000us 0.000us 58.662ms 4.93% 58.662ms 29.331ms 2
nccl:all_reduce 0.00% 0.000us 0.00% 0.000us 0.000us 58.662ms 4.93% 58.662ms 29.331ms 2
aten::ones 0.00% 20.084us 0.01% 121.210us 60.605us 0.000us 0.00% 2.656us 1.328us 2
aten::fill_ 0.00% 25.991us 0.01% 59.970us 29.985us 2.656us 0.00% 2.656us 1.328us 2
void at::native::vectorized_elementwise_kernel<4, at::native::FillFunctor<float>, std::array<char... 0.00% 0.000us 0.00% 0.000us 0.000us 2.656us 0.00% 2.656us 1.328us 2
aten::empty 0.04% 458.314us 0.04% 518.716us 5.085us 0.000us 0.00% 0.000us 0.000us 102
cudaEventQuery 0.01% 88.335us 0.01% 88.335us 0.736us 0.000us 0.00% 0.000us 0.000us 120
cudaLaunchKernel 0.00% 33.979us 0.00% 33.979us 16.989us 0.000us 0.00% 0.000us 0.000us 2
cudaStreamIsCapturing 0.00% 2.371us 0.00% 2.371us 1.185us 0.000us 0.00% 0.000us 0.000us 2
cudaEventRecord 0.01% 69.580us 0.01% 69.580us 0.756us 0.000us 0.00% 0.000us 0.000us 92
cudaStreamWaitEvent 0.01% 73.832us 0.01% 73.832us 0.839us 0.000us 0.00% 0.000us 0.000us 88
nccl:all_reduce 0.00% 0.000us 0 128.812us 64.406us 0.000us 0.00% 0.000us 0.000us 2
cudaStreamGetCaptureInfo_v2 0.00% 2.234us 0.00% 2.234us 1.117us 0.000us 0.00% 0.000us 0.000us 2
cudaLaunchKernelExC 0.02% 211.872us 0.02% 211.872us 5.045us 0.000us 0.00% 0.000us 0.000us 42
cudaPointerGetAttributes 0.00% 14.738us 0.00% 14.738us 0.737us 0.000us 0.00% 0.000us 0.000us 20
aten::transpose 0.00% 42.192us 0.00% 54.164us 2.708us 0.000us 0.00% 0.000us 0.000us 20
aten::as_strided 0.00% 11.972us 0.00% 11.972us 0.599us 0.000us 0.00% 0.000us 0.000us 20
cudaDeviceSynchronize 99.80% 1.196s 99.80% 1.196s 1.196s 0.000us 0.00% 0.000us 0.000us 1
---------------------------------------------------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
```
When changing to wait=0, the profiler only recorded fn [dispatch/combine] for 10/10 = 1 step.
```python
schedule = torch.profiler.schedule(wait=0, warmup=1, active=2, repeat=0)
with torch.profiler.profile(activities=[torch.profiler.ProfilerActivity.CPU,torch.profiler.ProfilerActivity.CUDA], schedule=schedule) as prof:
for i in range(4):
dist.all_reduce(torch.ones(1, dtype=torch.float, device='cuda'))
for _ in range(10):
fn()
prof.step()
```
```table
---------------------------------------------------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls
---------------------------------------------------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
void deep_ep::internode_ll::dispatch<3, 10, 7168>(void*, float*, int*, long*, void*, int*, void*,... 0.00% 0.000us 0.00% 0.000us 0.000us 368.628ms 76.52% 368.628ms 36.863ms 10
void deep_ep::internode_ll::combine<3, 10, 7168, 9>(void*, void*, int*, void*, void const*, long ... 0.00% 0.000us 0.00% 0.000us 0.000us 113.087ms 23.48% 113.087ms 11.309ms 10
ProfilerStep* 0.01% 44.027us 0.01% 44.027us 44.027us 0.000us 0.00% 0.000us 0.000us 1
cudaDeviceSynchronize 99.99% 497.752ms 99.99% 497.759ms 497.759ms 0.000us 0.00% 0.000us 0.000us 1
cudaEventQuery 0.00% 7.215us 0.00% 7.215us 3.607us 0.000us 0.00% 0.000us 0.000us 2
---------------------------------------------------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
```
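An editorial aside that may help narrow this down: the schedule object can be queried directly for the action it assigns to each step (a minimal sketch using the public `torch.profiler.schedule` API):
```python
import torch

# With wait=1, warmup=1, active=2, repeat=0 the four prof.step() calls map to:
# step 0 -> NONE (wait), step 1 -> WARMUP, steps 2-3 -> RECORD / RECORD_AND_SAVE.
schedule = torch.profiler.schedule(wait=1, warmup=1, active=2, repeat=0)
for step in range(4):
    print(step, schedule(step))
```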
### Versions
# python3 collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.28.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.14.0-3.0.3.kwai.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H800
GPU 1: NVIDIA H800
GPU 2: NVIDIA H800
GPU 3: NVIDIA H800
GPU 4: NVIDIA H800
GPU 5: NVIDIA H800
GPU 6: NVIDIA H800
GPU 7: NVIDIA H800
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) Platinum 8468V
BIOS Model name: Intel(R) Xeon(R) Platinum 8468V
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2401.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,890,337,857
|
[CD] Upgrade Windows xpu support package to 2025.0.1 for binary compression
|
chuanqi129
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
The binary compression feature can reduce the size of the Torch XPU Windows wheel packages
| true
|
2,890,327,920
|
DISABLED test_int_shape_inplace_binops (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2"
] | 3
|
NONE
|
Platforms: asan, linux, mac, macos, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_int_shape_inplace_binops&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38071508310).
Over the past 3 hours, it has been determined flaky in 16 workflow(s) with 32 failures and 16 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_int_shape_inplace_binops`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_misc.py", line 657, in test_int_shape_inplace_binops
torch._dynamo.testing.standard_test(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/testing.py", line 367, in standard_test
self.assertEqual(actual.op_count, expected_ops)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4092, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 1 but got 2.
Absolute difference: 1
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
python test/dynamo/test_misc.py MiscTests.test_int_shape_inplace_binops
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_misc.py`
cc @clee2000 @chauhang @penguinwu
| true
|
2,890,325,688
|
DISABLED test_empty_graph_nested_calls_fullgraph_True_dynamic_shapes (__main__.DynamicShapesReproTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamic shapes"
] | 5
|
NONE
|
Platforms: asan, linux, slow, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_empty_graph_nested_calls_fullgraph_True_dynamic_shapes&suite=DynamicShapesReproTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38072422020).
Over the past 3 hours, it has been determined flaky in 14 workflow(s) with 28 failures and 14 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_empty_graph_nested_calls_fullgraph_True_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_repros.py", line 6361, in test_empty_graph_nested_calls
self.assertEqual(len(torch._dynamo.eval_frame.dynamo_tls.traced_frame_infos), 2)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4092, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 2 but got 1.
Absolute difference: 1
Relative difference: 0.5
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesReproTests.test_empty_graph_nested_calls_fullgraph_True_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,890,298,153
|
ci: Switch manywheel build.sh to just use dev
|
seemethere
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143672
* #148419
* #148406
* __->__ #148310
To avoid an annoying error message like:
> fatal: no tag exactly matches 'a6520c85bd85875b09f2c68e51622699d7d07595'
These were popping up when GITHUB_REF is not set so let's just assume
that if someone is building without directly setting GITHUB_REF then
they're probably doing a dev build.
Signed-off-by: Eli Uriegas <github@terriblecode.com>
| true
|
2,890,263,614
|
DISABLED test_cond_autograd_zeros_unused_branch_complex_compile_mode_compile (__main__.TestControlFlow)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22functorch%2Ftest_control_flow.py%3A%3ATestControlFlow%3A%3Atest_cond_autograd_zeros_unused_branch_complex_compile_mode_compile%22%5D)).
| true
|
2,890,263,065
|
DISABLED test_cond_autograd_zeros_unused_branch_complex_compile_mode_compile (__main__.TestControlFlow)
|
ankurneog
|
closed
|
[
"skipped"
] | 1
|
CONTRIBUTOR
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22functorch%2Ftest_control_flow.py%3A%3ATestControlFlow%3A%3Atest_cond_autograd_zeros_unused_branch_complex_compile_mode_compile%22%5D)).
| true
|
2,890,262,366
|
batching rule for `aten::scatter_add_`
|
ZhongkuiMa
|
open
|
[
"triaged",
"enhancement",
"module: vmap",
"module: functorch"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
Hi Guys,
I'm a PhD student working on a PyTorch project. I recently encountered the following warning.
> UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::scatter_add_. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:81.)
It happens when I apply `vmap` to a function containing `scatter_add_`.
I need to operate on a very large tensor (maybe 10~40 GB), so I have to use `vmap` to save memory while keeping the efficiency of tensor operations.
This is a very common feature, and batching rules for operations similar to `scatter` may already exist.
All in all, I hope this feature can be implemented with priority.
### Alternatives
Currently, I just ignore the warning.
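An editorial aside (my sketch, untested against the author's exact workload): while the batching rule is missing, the batched case can often be expressed directly by scattering along a shifted dim instead of vmapping over the leading dimension:
```python
import torch

B, N, M = 4, 8, 5
src = torch.randn(B, N)
index = torch.randint(0, M, (B, N))
out = torch.zeros(B, M)

# Batched equivalent of vmap(lambda o, i, s: o.scatter_add_(0, i, s)):
# scatter along dim=1 so the leading batch dimension is handled implicitly.
out.scatter_add_(1, index, src)
print(out.shape)  # torch.Size([4, 5])
```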
### Additional context
Thanks for the PyTorch team's hard work.
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,890,181,352
|
torch.vmap incompatibility with DLPack functions
|
BillHuang2001
|
open
|
[
"triaged",
"module: vmap",
"module: dlpack",
"module: functorch"
] | 8
|
NONE
|
### 🐛 Describe the bug
Currently, torch.vmap does not work with DLPack functions, although it is expected to. When attempting to use torch.vmap with DLPack interop (e.g., between PyTorch and NumPy / JAX), the operation fails.
I tested this behavior with NumPy on CPU and with JAX on both CPU and GPU, and none of the configurations work.
```python
import torch
import numpy as np
def foo(x):
np_x = np.from_dlpack(x.detach())
s = np_x + 1
return torch.utils.dlpack.from_dlpack(s)
x = torch.arange(20).view(2, 10)
torch.vmap(foo)(x) # Bug: cannot use vmap with dlpack
```
Stacktrace:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[1], line 10
7 return torch.utils.dlpack.from_dlpack(s)
9 x = torch.arange(20).view(2, 10)
---> 10 torch.vmap(foo)(x) # Bug: cannot use vmap with dlpack
File ~/-/.venv/lib/python3.13/site-packages/torch/_functorch/apis.py:203, in vmap.<locals>.wrapped(*args, **kwargs)
202 def wrapped(*args, **kwargs):
--> 203 return vmap_impl(
204 func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs
205 )
File ~/-/.venv/lib/python3.13/site-packages/torch/_functorch/vmap.py:331, in vmap_impl(func, in_dims, out_dims, randomness, chunk_size, *args, **kwargs)
320 return _chunked_vmap(
321 func,
322 flat_in_dims,
(...)
327 **kwargs,
328 )
330 # If chunk_size is not specified.
--> 331 return _flat_vmap(
332 func,
333 batch_size,
334 flat_in_dims,
335 flat_args,
336 args_spec,
337 out_dims,
338 randomness,
339 **kwargs,
340 )
File ~/-/.venv/lib/python3.13/site-packages/torch/_functorch/vmap.py:479, in _flat_vmap(func, batch_size, flat_in_dims, flat_args, args_spec, out_dims, randomness, **kwargs)
475 with vmap_increment_nesting(batch_size, randomness) as vmap_level:
476 batched_inputs = _create_batched_inputs(
477 flat_in_dims, flat_args, vmap_level, args_spec
478 )
--> 479 batched_outputs = func(*batched_inputs, **kwargs)
480 return _unwrap_batched(batched_outputs, out_dims, vmap_level, batch_size, func)
Cell In[1], line 5, in foo(x)
4 def foo(x):
----> 5 np_x = np.from_dlpack(x.detach())
6 s = np_x + 1
7 return torch.utils.dlpack.from_dlpack(s)
File ~/-/.venv/lib/python3.13/site-packages/torch/_tensor.py:1724, in Tensor.__dlpack__(self, stream)
1720 raise RuntimeError(
1721 "Can't export to dlpack an XLA tensor that is not on CUDA."
1722 )
1723 return xla_dlpack.to_dlpack(self)
-> 1724 return torch.to_dlpack(self)
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.13.2 (main, Feb 5 2025, 19:11:32) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-6.1.0-25-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU(s) scaling MHz: 34%
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 108 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,890,140,846
|
Reland: [inductor] Simplify grid handling
|
jansel
|
closed
|
[
"module: rocm",
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"skip-pr-sanity-checks",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ci-no-td",
"ciflow/inductor-rocm"
] | 29
|
CONTRIBUTOR
|
Summary:
Relands D69965761 / https://github.com/pytorch/pytorch/pull/147583
Before this PR, calling a triton kernel would look like:
```py
kernel.run(a, b, xnumel, grid=grid(xnumel), stream=stream0)
```
where the `grid=` was passed as a callable (function closure) arg. This PR removes the grid arg:
```py
kernel.run(a, b, xnumel, stream=stream0)
```
Instead, the grid computation is now included in the kernel launcher, with something like:
```py
def launcher(in_ptr0, out_ptr0, xnumel, stream):
grid_0 = ((xnumel + 1023) >> 10)
grid_1 = 1
grid_2 = 1
runner(grid_0, grid_1, grid_2, stream, function, metadata, None, launch_enter_hook, launch_exit_hook, in_ptr0, out_ptr0, xnumel)
```
This should be faster, since we remove multiple function/dict calls and are able to specialize the grid computation for each `triton.Config`.
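For illustration, here is a tiny standalone sketch of what per-config specialization means for the grid arithmetic (the XBLOCK values are assumed examples, not taken from this PR):
```py
# illustration only: each triton.Config gets its own launcher with the ceil-division
# constant baked in at codegen time, instead of calling a shared grid() closure
def grid_for_xblock_1024(xnumel):
    return (xnumel + 1023) >> 10   # matches the generated launcher shown above

def grid_for_xblock_2048(xnumel):
    return (xnumel + 2047) >> 11   # a different config bakes in a different constant

assert grid_for_xblock_1024(3000) == 3
assert grid_for_xblock_2048(3000) == 2
```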
It also allows us to unify the handling of grids between the Python and C++ wrapper code. Before this, C++ wrapper code didn't actually support dynamic grid sizes and instead burned in a static grid.
This unification allows this PR to be a net deletion of code.
Differential [disconnected] Revision: D70471332
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,890,094,891
|
Optimize param `prepend` class reference `torch.nn.Module`
|
zeshengzong
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Fixes #147696
## Changes
Change `prepend` description `torch.nn.modules.Module` to `torch.nn.Module`
## Test Result
### Before

### After

cc @mikaylagawarecki
| true
|
2,890,028,288
|
Better log message to update pr_time_benchmarks/expected_results.csv
|
jansel
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148292
* #148288
* #148261
* #148260
* #148243
* __->__ #148303
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,890,011,725
|
broadcast_object_list not release GPU
|
psc0606
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
This is a very simple code snippet. The master process broadcasts parameters to worker processes through a manual Gradio click callback. After the workers have received the parameters, the GPU utilization of the worker processes never drops (it stays at 100% constantly). What is happening?
```
import os
import torch
import torch.distributed as dist
import gradio as gr
from threading import Thread
import time
def init_dist():
    rank = int(os.getenv("RANK", 0))
    world_size = int(os.getenv("WORLD_SIZE", 1))
    local_rank = int(os.getenv("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    dist.init_process_group(
        backend="nccl",
        init_method="env://",
        rank=rank,
        world_size=world_size,
    )
    return dist.get_rank(), dist.get_world_size()

def worker_loop():
    while True:
        obj = [None]
        if dist.is_available():
            print("Worker receiving parameter...")
            dist.broadcast_object_list(obj, src=0)
            print("Worker received parameter.")
            params = obj[0]
            print(f"Worker received: {params}")
        dist.barrier()

def master_interface():
    with gr.Blocks() as demo:
        with gr.Row():
            with gr.Column():
                prompt = gr.Textbox(label="Input")
                steps = gr.Slider(1, 100, value=20, label="steps")
                seed = gr.Number(42, label="random seed")
                submit = gr.Button("Broadcast Parameter")
                output = gr.Textbox(label="In Broadcasting.")

        def broadcast_params(prompt, steps, seed):
            params = {
                "prompt": prompt,
                "steps": int(steps),
                "seed": int(seed)
            }
            dist.broadcast_object_list([params], src=0)
            return f"Broadcasted Parameter: {params}"

        submit.click(
            fn=broadcast_params,
            inputs=[prompt, steps, seed],
            outputs=output
        )
    demo.launch(server_name="0.0.0.0", server_port=7900)

if __name__ == "__main__":
    rank, world_size = init_dist()
    if rank == 0:
        print("Master starting gradio")
        master_interface()
    else:
        print(f"Worker [{rank}] starting received...")
        worker_loop()
```

### Versions
gradio 5.18.0
gradio_client 1.7.2
pytorch 2.6.0
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-nccl-cu12 2.21.5
Reproduced with the command below:
```shell
CUDA_VISIBLE_DEVICES=2,5,6,7 torchrun --nproc_per_node=4 --master_port 29504 simple.py
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,890,009,812
|
```torch.as_strided``` negative stride SIGSEV fix when using ```torch.compile```
|
AmalDevHaridevan
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4
|
NONE
|
Fixes #147100
```meta``` tensors (and by extension FakeTensors) are immune to ```TORCH_CHECK```. This was found by tracing the execution of the ```_dispatch_impl``` method of the ```FakeTensorMode``` context manager.
This PR introduces special handling (just) for empty tensors, so that ```Dynamo``` can compile modules and functions properly.
```meta``` tensors slip through the ```TORCH_CHECK``` assertions that are supposed to trigger RuntimeErrors, which causes undefined behavior as seen in issue #147100. For example, when calling ```torch.as_strided``` on an empty tensor, normal tensors (i.e. ```device != meta```) trigger the ```TORCH_CHECK``` assertions, but the ```meta``` ones do not. This causes the ```module``` or ```func``` to be compiled by ```dynamo``` and later to produce undefined behavior.
The actual reason for the ```SEGFAULT``` can be found here (https://pytorch.org/docs/stable/torch.compiler_fake_tensor.html): a CPU kernel gets called with a fake tensor and tries to dereference its data pointer.
# Solution
The simplest solution is to do a simple check in ```_dispatch_impl```, only for empty tensors (the currently known edge case), and then perform real tensor propagation.
1. Check if we have any ```Empty Tensor``` in the ```flat_arg_fake_tensors```, flattened by ```pytree```
2. If we do have empty tensors, then to ensure propagation works, every tensor must have a ```real_tensor```. I found that dynamically modifying ```FakeTensorMode.propagate_real_tensors``` doesn't ensure this. However, if we set this attribute to ```True``` always (regardless of whether we have an empty tensor or not), then we obtain the desired behavior. But this means there is no flexibility. Therefore, we iterate over the fake tensors and set their ```real_tensor``` according to the ```shape``` and ```dtype``` information.
3. Store the previous state of ```FakeTensorMode.propagate_real_tensors``` and set it to ```True``` based on our edge-case check
4. After propagation, restore the state of ```FakeTensorMode.propagate_real_tensors``` (see the sketch after this list)
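A minimal sketch of how these steps could fit together (the helper name ```_dispatch_with_empty_tensor_handling``` and its argument layout are hypothetical simplifications, not the actual ```_dispatch_impl``` code):
```python
import torch

# Hypothetical sketch of the empty-tensor special case described above; the helper
# name, arguments, and structure are simplified stand-ins for the real logic.
def _dispatch_with_empty_tensor_handling(mode, flat_arg_fake_tensors, dispatch):
    has_empty = any(t.numel() == 0 for t in flat_arg_fake_tensors)
    prev = mode.propagate_real_tensors              # step 3: remember the previous state
    if has_empty:
        for t in flat_arg_fake_tensors:
            if getattr(t, "real_tensor", None) is None:
                # step 2: give every fake tensor a real counterpart for propagation
                t.real_tensor = torch.empty(t.shape, dtype=t.dtype)
        mode.propagate_real_tensors = True          # step 3: force real-tensor propagation
    try:
        return dispatch()                           # run the normal dispatch path
    finally:
        mode.propagate_real_tensors = prev          # step 4: restore the previous state
```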
# Reproduction Code
```python
import torch
@torch.compile
def f(*args):
    sym_0, sym_1, sym_2, sym_3 = args
    var_374 = torch.tril_indices(row=sym_0, col=sym_1, offset=0)
    var_483 = torch.as_strided(var_374, size=sym_2, stride=sym_3, storage_offset=None)
    return var_483 + 1.

def f2(*args):
    sym_0, sym_1, sym_2, sym_3 = args
    var_374 = torch.tril_indices(row=sym_0, col=sym_1, offset=0)
    var_483 = torch.as_strided(var_374, size=sym_2, stride=sym_3, storage_offset=None)
    return var_483 + 1.

print("="*80)
print(f"Executing Non-compiled f (ARGS: {(751, 0, (1,), (-1,))})")
try:
    res = f2(751, 0, (1,), (-1,))
except RuntimeError as exc:
    print("\t Got RuntimeError: ")
    print("\t", exc)
print("="*80)
print(f"Executing Compiled f (ARGS: {(751, 0, (1,), (-1,))})")
res = f(751, 0, (1,), (-1,))
print("\t", res)
```
## Before Fix
```
================================================================================
Executing Non-compiled f (ARGS: (751, 0, (1,), (-1,)))
Got RuntimeError:
as_strided: Negative strides are not supported at the moment, got strides: [-1]
================================================================================
Executing Compiled f (ARGS: (751, 0, (1,), (-1,)))
Segmentation fault (core dumped)
```
## After Fix
```
================================================================================
Executing Non-compiled f (ARGS: (751, 0, (1,), (-1,)))
Got RuntimeError:
as_strided: Negative strides are not supported at the moment, got strides: [-1]
================================================================================
Executing Compiled f (ARGS: (751, 0, (1,), (-1,)))
Traceback (most recent call last):
File "/home/harid/pytorch/../test.py", line 28, in <module>
res = f(751, 0, (1,), (-1,))
^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/eval_frame.py", line 586, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/convert_frame.py", line 1422, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/convert_frame.py", line 1203, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/convert_frame.py", line 594, in __call__
return _compile(
^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/convert_frame.py", line 1053, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/convert_frame.py", line 755, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/convert_frame.py", line 791, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/bytecode_transformation.py", line 1418, in transform_code_object
transformations(instructions, code_options)
File "/home/harid/pytorch/torch/_dynamo/convert_frame.py", line 256, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/convert_frame.py", line 709, in transform
tracer.run()
File "/home/harid/pytorch/torch/_dynamo/symbolic_convert.py", line 3305, in run
super().run()
File "/home/harid/pytorch/torch/_dynamo/symbolic_convert.py", line 1216, in run
while self.step():
^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/symbolic_convert.py", line 1126, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/harid/pytorch/torch/_dynamo/symbolic_convert.py", line 794, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/symbolic_convert.py", line 2753, in CALL
self._call(inst)
File "/home/harid/pytorch/torch/_dynamo/symbolic_convert.py", line 2747, in _call
self.call_function(fn, args, kwargs)
File "/home/harid/pytorch/torch/_dynamo/symbolic_convert.py", line 1050, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/variables/torch.py", line 1160, in call_function
tensor_variable = wrap_fx_proxy(
^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/variables/builder.py", line 2284, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/variables/builder.py", line 2350, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/variables/builder.py", line 2446, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/utils.py", line 3205, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/home/harid/pytorch/torch/_dynamo/utils.py", line 3103, in get_fake_value
ret_val = wrap_fake_exception(
^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/utils.py", line 2617, in wrap_fake_exception
return fn()
^^^^
File "/home/harid/pytorch/torch/_dynamo/utils.py", line 3104, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_dynamo/utils.py", line 3301, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/home/harid/pytorch/torch/_dynamo/utils.py", line 3260, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_subclasses/fake_tensor.py", line 1282, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_subclasses/fake_tensor.py", line 1823, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_subclasses/fake_tensor.py", line 1384, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_subclasses/fake_tensor.py", line 2236, in _dispatch_impl
real_out = func(*real_args, **real_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/harid/pytorch/torch/_ops.py", line 756, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <built-in method as_strided of type object at 0x7fd8a9aecc00>(*(FakeTensor(..., size=(2, 0), dtype=torch.int64),), **{'size': (1,), 'stride': (-1,), 'storage_offset': None}): got RuntimeError('as_strided: Negative strides are not supported at the moment, got strides: [-1]')
from user code:
File "/home/harid/pytorch/../test.py", line 8, in f
var_483 = torch.as_strided(var_374, size=sym_2, stride=sym_3, storage_offset=None)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
| true
|
2,889,968,404
|
`torch.Tensor.pinverse` can cause an `INTERNAL ASSERT FAILED`
|
cybersupersoap
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
An `INTERNAL ASSERT FAILED` error will be raised when using `torch.Tensor.pinverse`
```python
import torch
_input_tensor = torch.rand(2**31, 3)
_output_tensor = torch.Tensor.pinverse(_input_tensor)
print('Input tensor: ', _input_tensor)
print('Output tensor: ', _output_tensor)
```
Error message:
```
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. linalg.svd: Argument 2 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @wenzhe-nrv
| true
|
2,889,965,919
|
`torch.linalg.cond` can cause an `INTERNAL ASSERT FAILED`
|
cybersupersoap
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
An `INTERNAL ASSERT FAILED` error will be raised when using `torch.linalg.cond`
```python
import torch
A = torch.rand(2**31, 3)
cond_A = torch.linalg.cond(A)
print('cond_A = ', cond_A)
```
Error message:
```
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. linalg.svd: Argument 2 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @wenzhe-nrv
| true
|
2,889,963,878
|
`torch.linalg.pinv` can cause an `INTERNAL ASSERT FAILED`
|
cybersupersoap
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
An `INTERNAL ASSERT FAILED` error will be raised when using `torch.linalg.pinv`
```python
import torch
A = torch.randn((2**31, 3))
A_inv = torch.linalg.pinv(A)
```
Error message:
```
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1604, please report a bug to PyTorch. linalg.svd: Argument 2 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
The error is reproducible with the nightly-build version 2.7.0.dev20250208+cpu .
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @wenzhe-nrv
| true
|
2,889,961,531
|
`torch.nansum` can cause a `Segmentation fault (core dumped)`
|
cybersupersoap
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
A `Segmentation fault` will occur when using `torch.nansum`
```python
import torch
import numpy as np
arg_1_tensor = torch.rand([2, 2], dtype=torch.float32)
arg_1 = arg_1_tensor.clone()
arg_2 = np.array(0)
res = torch.nansum(arg_1, dim=arg_2)
```
Error message:
```
Segmentation fault (core dumped)
```
The error is reproducible with the nightly-build version 2.7.0.dev20250208+cpu
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
| true
|
2,889,919,103
|
DISABLED test_int_shape_binops (__main__.MiscTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 4
|
NONE
|
Platforms: asan, linux, mac, macos, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_int_shape_binops&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38064942957).
Over the past 3 hours, it has been determined flaky in 22 workflow(s) with 44 failures and 22 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_int_shape_binops`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_misc.py", line 429, in test_int_shape_binops
torch._dynamo.testing.standard_test(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self, fn, 1, expected_ops=1, expected_ops_dynamic=ifdynstaticdefault(1, 9)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/testing.py", line 367, in standard_test
self.assertEqual(actual.op_count, expected_ops)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 4092, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
...<4 lines>...
)
AssertionError: Scalars are not equal!
Expected 1 but got 9.
Absolute difference: 8
Relative difference: 8.0
To execute this test, run the following from the base repo dir:
python test/dynamo/test_misc.py MiscTests.test_int_shape_binops
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_misc.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,889,918,544
|
DISABLED test_dont_aggressively_write_assert_dynamic_shapes (__main__.DynamicShapesReproTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: asan, linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dont_aggressively_write_assert_dynamic_shapes&suite=DynamicShapesReproTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38064786342).
Over the past 3 hours, it has been determined flaky in 19 workflow(s) with 38 failures and 19 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_dont_aggressively_write_assert_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_repros.py", line 4748, in test_dont_aggressively_write_assert
self.assertTrue(lower_ranges == [4, 2])
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesReproTests.test_dont_aggressively_write_assert_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,889,917,934
|
Treat CUDA warnings as errors
|
cyyever
|
open
|
[
"oncall: distributed",
"open source",
"Stale",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,889,910,605
|
Upgrade submodule oneDNN to v3.7.1
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ci-no-td",
"ciflow/linux-aarch64"
] | 8
|
COLLABORATOR
|
This PR is to upgrade submodule oneDNN to v3.7.1.
## Improvements
- Improved performance of convolution and matmul primitives on Intel Xeon processors with Intel AMX instruction set support (formerly Sapphire Rapids and Granite Rapids).
- Improved performance of int8 and fp32 forward convolution primitive on processors with Intel AVX2 instruction set support.
- Improved performance of fp8 matmul primitives with bf16 and fp16 bias data type on Intel Xeon processors with Intel AMX instruction set support (formerly Sapphire Rapids and Granite Rapids).
- Introduced initial optimizations for Intel GPUs based on Xe3 architecture.
- Added bfloat16 support for SDPA, implemented fp16 and bf16 gemm kernel in SDPA.
- Fixed f16 matmul accuracy, an issue where SDPA could not be dispatched to the ukernel, bf16/fp16/fp32 convolution performance, an INT8 kernel page fault, a deconvolution precision issue on complex128 and fp64, and a gemm correctness issue for float16.
- Improved bf16 matmul performance with fp32 destination with Arm Compute Library (ACL).
- Improved bf16 to fp32 reorder performance.
- Improved bf16 reorder performance.
- Improved bf16 convolution with ACL.
Fixes https://github.com/pytorch/pytorch/issues/136348.
## Validation results on CPU
1. NLP models accuracy/inference/training


2. Torchbench cpu userbenchmark inference & training

3. Inductor quantization

4. Dynamo benchmarks








## Validation results on XPU
Accuracy is same as baseline. Performance is shown below.

## Validation results on ARM


cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,889,875,266
|
[fx] Optimize TracerBase.create_arg and Graph._gen_python_code
|
jansel
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148292
* #148288
* #148261
* #148260
* #148243
Before: 19502951 function calls (18702776 primitive calls) in 8.533 seconds
After: 16402551 function calls (15602452 primitive calls) in 7.701 seconds
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,889,870,254
|
Fix extra semicolon warning
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,889,866,526
|
[PoC] Add RECORD_FUNCTION calls for aoti shim layer wrappers
|
sanchitintel
|
closed
|
[
"open source",
"ciflow/trunk",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Ports @chunyuan-w's [work](https://github.com/pytorch/pytorch/commit/734f940f527a53bde1334b8a8819062c78029f2f#diff-b60511be1e7fafc2c45e7c0cb3e769ad48b2a1060a69759f58979ffc33b38a79) to the main branch so that some ops appear in Inductor-CPU profiling results.
The SDPA op doesn't appear in the profiling results with this small example because it's being decomposed even for CPU (this doesn't seem intentional, so it should probably be investigated separately):
<details>
```python
import torch
import torch.nn.functional as F
from torch.profiler import profile, record_function, ProfilerActivity
from torch._inductor import config as inductor_config
inductor_config.profiler_mark_wrapper_call = True
inductor_config.cpp.enable_kernel_profile = True
def attention_block(query, key, value, dropout_p=0.0):
    attn_output = F.scaled_dot_product_attention(
        query, key, value,
        dropout_p=dropout_p
    )
    return attn_output

class SimpleAttentionModule(torch.nn.Module):
    def __init__(self, embed_dim):
        super().__init__()
        self.linear_q = torch.nn.Linear(embed_dim, embed_dim, dtype=torch.bfloat16)
        self.linear_k = torch.nn.Linear(embed_dim, embed_dim, dtype=torch.bfloat16)
        self.linear_v = torch.nn.Linear(embed_dim, embed_dim, dtype=torch.bfloat16)

    def forward(self, x):
        # Project x to query, key, and value
        query = self.linear_q(x)
        key = self.linear_k(x)
        value = self.linear_v(x)
        return attention_block(query, key, value)

model = SimpleAttentionModule(embed_dim=4096)
x = torch.randn(64, 4096, dtype=torch.bfloat16)

with torch.no_grad():
    # Run a forward pass on the compiled model
    compiled_model = torch.compile(model)
    output = compiled_model(x)
    with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        compiled_model(x)

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=100))
```
</details>
But if a group of ops are pattern-matched to the SDPA op in Inductor, then SDPA op should be visible in Inductor profiling results with this patch.
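As a side note (a workflow suggestion, not something included in this PR), one way to see whether SDPA survived as a single call or was decomposed is to dump Inductor's generated code before compiling:
```python
import torch

# Print Inductor's generated wrapper/kernel code so an extern SDPA call (or its
# decomposition into smaller kernels) is visible; equivalent to TORCH_LOGS="output_code".
torch._logging.set_logs(output_code=True)
```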
| true
|
2,889,840,486
|
Triton Kernel Rejects NamedTupleVariable Arguments
|
cora-codes
|
open
|
[
"triaged",
"module: fx",
"oncall: pt2",
"module: dynamo",
"module: user triton"
] | 9
|
NONE
|
### 🚀 The feature, motivation and pitch
PyTorch's TorchDynamo fails when passing NamedTupleVariable to Triton kernels, raising "Unexpected argument type for a Triton kernel". It would be nice to support named tuple arguments since it makes writing Triton kernels far cleaner.
```python
import torch
import typing
import triton
from torch.profiler import profile, record_function, ProfilerActivity
class T1(typing.NamedTuple):
    foo: None = None
    bar: None = None

class T2(typing.NamedTuple):
    foo: T1 = T1()
    bar: T1 = T1()

class T3(typing.NamedTuple):
    foo: T2 = T2()
    bar: T2 = T2()

class T4(typing.NamedTuple):
    foo: T3 = T3()
    bar: T3 = T3()

class T5(typing.NamedTuple):
    foo: T4 = T4()
    bar: T4 = T4()

@triton.jit
def test(t5: T5):
    pass

if __name__ == "__main__":
    t5 = T5()

    @torch.compile(mode="max-autotune-no-cudagraphs", fullgraph=True)
    def main():
        for i in range(100):
            test[(1,)](t5)

    main()
```
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames @oulgen @aakhundov @davidberard98
| true
|
2,889,840,116
|
[fx] Optimizations for node name generation
|
jansel
|
closed
|
[
"Merged",
"Reverted",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148292
* __->__ #148288
* #148261
* #148260
* #148243
Before:

After:

cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,889,833,358
|
[MPS] add slogdet and logdet implementations to mps
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 15
|
COLLABORATOR
|
Low-hanging fruit: all the ops these need are already implemented, so just adding them to native functions adds the functionality on MPS. The next op I add should probably be lu_solve, seeing how many ops need it for the grad calculation.
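A quick usage sketch to exercise the new paths (assumes an MPS-capable machine; not part of this PR's test plan):
```python
import torch

a = torch.randn(3, 3, device="mps")
sign, logabsdet = torch.linalg.slogdet(a)       # now runs natively on MPS
print(sign.item(), logabsdet.item())

spd = a @ a.T + 3 * torch.eye(3, device="mps")  # SPD input so logdet is finite
print(torch.logdet(spd).item())
```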
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,889,814,194
|
[BE][MPS] Use `copysign` for imaginary part of sqrt
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148286
* #148285
Also, it's tempting to try replacing `a*a + b*b` with `dot(input[index])`, but for some reason it results in a slightly different output
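For reference, a small Python sketch of the standard principal-square-root formula; it is a generic illustration (not the Metal kernel from this PR) of why `copysign` is the natural way to carry the sign of the imaginary part, including signed zeros:
```python
import cmath
import math

def complex_sqrt(a: float, b: float) -> complex:
    # principal sqrt of a + b*i: real part is sqrt((|z| + a) / 2), imaginary part is
    # sqrt((|z| - a) / 2) with its sign copied from b, which also handles b == -0.0
    r = math.hypot(a, b)
    return complex(math.sqrt((r + a) / 2.0),
                   math.copysign(math.sqrt((r - a) / 2.0), b))

for z in (complex(3.0, -4.0), complex(-3.0, 4.0), complex(0.0, 2.0)):
    assert cmath.isclose(complex_sqrt(z.real, z.imag), cmath.sqrt(z))
```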
| true
|
2,889,800,275
|
[MPS] Fix sqrt and other for `torch.chalf`
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148286
* __->__ #148285
Those kernels, instead of being instantiated for half2 (which corresponds to ComplexHalf), were instantiated for short2, which resulted in the following test
```
% python3 -c "import torch; print(torch.rand(6, device='mps', dtype=torch.chalf).sqrt())"
```
failing with
```
RuntimeError: Failed to create function state object for: sqrt_complex_half_half
```
As sqrt is not implemented for CPU, add an explicit test to `test_sqrt`
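A rough sketch of what such a check could look like (an illustration that compares against a cfloat computation on the same device, since there is no chalf reference on CPU; not necessarily the exact test added by this PR):
```python
import torch

x = torch.rand(6, device="mps", dtype=torch.chalf)
# no chalf sqrt on CPU, so upcast to cfloat for the reference and compare loosely
expected = x.to(torch.cfloat).sqrt().to(torch.chalf)
torch.testing.assert_close(x.sqrt(), expected, rtol=1e-2, atol=1e-3)
```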
| true
|
2,889,677,479
|
[BE] Fix extra semicolon warning
|
malfet
|
closed
|
[
"module: cpu",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Introduced by https://github.com/pytorch/pytorch/pull/146596
I.e., while building locally, my log was littered with
```
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/native/LossNLL2d.cpp:5:
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/native/cpu/utils.h:5:
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec.h:7:
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec256/vec256.h:15:
/Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec256/vec256_half.h:228:42: warning: extra ';' outside of a function is incompatible with C++98 [-Wc++98-compat-extra-semi]
228 | LOAD_FP32_NON_VECTORIZED_INIT(Half, fp16);
| ^
2 warnings generated.
[230/1017] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/LossNLL.cpp.o
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/native/LossNLL.cpp:9:
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/native/cpu/utils.h:5:
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec.h:7:
In file included from /Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec256/vec256.h:14:
/Users/malfet/git/pytorch/pytorch/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:228:46: warning: extra ';' outside of a function is incompatible with C++98 [-Wc++98-compat-extra-semi]
228 | LOAD_FP32_NON_VECTORIZED_INIT(BFloat16, bf16);
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,889,469,310
|
RuntimeError: use_libuv was requested but PyTorch was build without libuv support
|
jiangxinufo
|
open
|
[
"oncall: distributed",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
RuntimeError: use_libuv was requested but PyTorch was build without libuv support
(llama_factory) PS F:\jx\LLaMA-Factory> llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
[INFO|2025-03-02 18:38:41] llamafactory.cli:157 >> Initializing distributed tasks at: 127.0.0.1:27838
W0302 18:38:59.745000 3776 site-packages\torch\distributed\elastic\multiprocessing\redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
Traceback (most recent call last):
File "F:\CondaData\envs\llama_factory\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "F:\CondaData\envs\llama_factory\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "F:\CondaData\envs\llama_factory\Scripts\torchrun.exe\__main__.py", line 7, in <module>
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\elastic\multiprocessing\errors\__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\run.py", line 919, in main
run(args)
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\run.py", line 910, in run
elastic_launch(
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\launcher\api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\launcher\api.py", line 260, in launch_agent
result = agent.run()
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\elastic\metrics\api.py", line 137, in wrapper
result = f(*args, **kwargs)
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\elastic\agent\server\api.py", line 696, in run
result = self._invoke_run(role)
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\elastic\agent\server\api.py", line 849, in _invoke_run
self._initialize_workers(self._worker_group)
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\elastic\metrics\api.py", line 137, in wrapper
result = f(*args, **kwargs)
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\elastic\agent\server\api.py", line 668, in _initialize_workers
self._rendezvous(worker_group)
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\elastic\metrics\api.py", line 137, in wrapper
result = f(*args, **kwargs)
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\elastic\agent\server\api.py", line 500, in _rendezvous
rdzv_info = spec.rdzv_handler.next_rendezvous()
File "F:\CondaData\envs\llama_factory\lib\site-packages\torch\distributed\elastic\rendezvous\static_tcp_rendezvous.py", line 67, in next_rendezvous
self._store = TCPStore( # type: ignore[call-arg]
RuntimeError: use_libuv was requested but PyTorch was build without libuv support
I have already installed the libuv library, but the error still occurs. Some people suggested that I build PyTorch myself so that it supports libuv. Is there any other solution?
(llama_factory) PS F:\jx\LLaMA-Factory> conda list
# packages in environment at F:\CondaData\envs\llama_factory:
#
# Name Version Build Channel
accelerate 1.1.1 pypi_0 pypi
aiofiles 23.2.1 pypi_0 pypi
aiohappyeyeballs 2.4.6 pypi_0 pypi
aiohttp 3.11.12 pypi_0 pypi
aiosignal 1.3.2 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
anyio 4.8.0 pypi_0 pypi
async-timeout 5.0.1 pypi_0 pypi
attrs 25.1.0 pypi_0 pypi
audioread 3.0.1 pypi_0 pypi
av 14.1.0 pypi_0 pypi
bitsandbytes 0.44.0 pypi_0 pypi
bzip2 1.0.8 h2466b09_7 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
ca-certificates 2025.1.31 h56e8100_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
certifi 2025.1.31 pypi_0 pypi
cffi 1.17.1 pypi_0 pypi
charset-normalizer 3.4.1 pypi_0 pypi
click 8.1.8 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
contourpy 1.3.1 pypi_0 pypi
cycler 0.12.1 pypi_0 pypi
datasets 3.2.0 pypi_0 pypi
decorator 5.1.1 pypi_0 pypi
dill 0.3.8 pypi_0 pypi
docstring-parser 0.16 pypi_0 pypi
einops 0.8.1 pypi_0 pypi
exceptiongroup 1.2.2 pypi_0 pypi
fastapi 0.115.8 pypi_0 pypi
ffmpy 0.5.0 pypi_0 pypi
filelock 3.17.0 pypi_0 pypi
fire 0.7.0 pypi_0 pypi
fonttools 4.56.0 pypi_0 pypi
frozenlist 1.5.0 pypi_0 pypi
fsspec 2024.9.0 pypi_0 pypi
gradio 5.12.0 pypi_0 pypi
gradio-client 1.5.4 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
httpcore 1.0.7 pypi_0 pypi
httpx 0.28.1 pypi_0 pypi
huggingface-hub 0.29.1 pypi_0 pypi
idna 3.10 pypi_0 pypi
intel-openmp 2021.4.0 pypi_0 pypi
jieba 0.42.1 pypi_0 pypi
jinja2 3.1.5 pypi_0 pypi
joblib 1.4.2 pypi_0 pypi
kiwisolver 1.4.8 pypi_0 pypi
lazy-loader 0.4 pypi_0 pypi
libffi 3.4.6 h537db12_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
liblzma 5.6.4 h2466b09_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
librosa 0.10.2.post1 pypi_0 pypi
libsqlite 3.48.0 h67fdade_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libuv 1.50.0 h2466b09_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libzlib 1.3.1 h2466b09_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
llamafactory 0.9.2.dev0 pypi_0 pypi
llvmlite 0.44.0 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 2.1.5 pypi_0 pypi
matplotlib 3.10.0 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
mkl 2021.4.0 pypi_0 pypi
modelscope 1.23.0 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
msgpack 1.1.0 pypi_0 pypi
multidict 6.1.0 pypi_0 pypi
multiprocess 0.70.16 pypi_0 pypi
networkx 3.4.2 pypi_0 pypi
nltk 3.9.1 pypi_0 pypi
numba 0.61.0 pypi_0 pypi
numpy 1.26.4 pypi_0 pypi
openssl 3.4.1 ha4e3fda_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
optimum 1.24.0 pypi_0 pypi
orjson 3.10.15 pypi_0 pypi
packaging 24.2 pypi_0 pypi
pandas 2.2.3 pypi_0 pypi
peft 0.12.0 pypi_0 pypi
pillow 11.1.0 pypi_0 pypi
pip 25.0.1 pyh8b19718_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
platformdirs 4.3.6 pypi_0 pypi
pooch 1.8.2 pypi_0 pypi
propcache 0.2.1 pypi_0 pypi
protobuf 5.29.3 pypi_0 pypi
psutil 7.0.0 pypi_0 pypi
pyarrow 19.0.0 pypi_0 pypi
pycparser 2.22 pypi_0 pypi
pydantic 2.10.6 pypi_0 pypi
pydantic-core 2.27.2 pypi_0 pypi
pydub 0.25.1 pypi_0 pypi
pygments 2.19.1 pypi_0 pypi
pyparsing 3.2.1 pypi_0 pypi
python 3.10.16 h37870fc_1_cpython https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
python-dateutil 2.9.0.post0 pypi_0 pypi
python-multipart 0.0.20 pypi_0 pypi
pytz 2025.1 pypi_0 pypi
pyyaml 6.0.2 pypi_0 pypi
regex 2024.11.6 pypi_0 pypi
requests 2.32.3 pypi_0 pypi
rich 13.9.4 pypi_0 pypi
rouge-chinese 1.0.3 pypi_0 pypi
ruff 0.9.6 pypi_0 pypi
safehttpx 0.1.6 pypi_0 pypi
safetensors 0.5.2 pypi_0 pypi
scikit-learn 1.6.1 pypi_0 pypi
scipy 1.15.2 pypi_0 pypi
semantic-version 2.10.0 pypi_0 pypi
sentencepiece 0.2.0 pypi_0 pypi
setuptools 75.8.0 pyhff2d567_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
shellingham 1.5.4 pypi_0 pypi
shtab 1.7.1 pypi_0 pypi
six 1.17.0 pypi_0 pypi
sniffio 1.3.1 pypi_0 pypi
soundfile 0.13.1 pypi_0 pypi
soxr 0.5.0.post1 pypi_0 pypi
sse-starlette 2.2.1 pypi_0 pypi
starlette 0.45.3 pypi_0 pypi
sympy 1.13.1 pypi_0 pypi
tbb 2021.13.1 pypi_0 pypi
termcolor 2.5.0 pypi_0 pypi
threadpoolctl 3.5.0 pypi_0 pypi
tiktoken 0.9.0 pypi_0 pypi
tk 8.6.13 h5226925_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
tokenizers 0.21.0 pypi_0 pypi
tomlkit 0.13.2 pypi_0 pypi
torch 2.5.1+cu121 pypi_0 pypi
torchvision 0.20.1+cu121 pypi_0 pypi
tqdm 4.67.1 pypi_0 pypi
transformers 4.49.0 pypi_0 pypi
trl 0.9.6 pypi_0 pypi
typer 0.15.1 pypi_0 pypi
typing-extensions 4.12.2 pypi_0 pypi
tyro 0.8.14 pypi_0 pypi
tzdata 2025.1 pypi_0 pypi
ucrt 10.0.22621.0 h57928b3_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
urllib3 2.3.0 pypi_0 pypi
uvicorn 0.34.0 pypi_0 pypi
vc 14.3 h5fd82a7_24 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
vc14_runtime 14.42.34433 h6356254_24 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
websockets 14.2 pypi_0 pypi
wheel 0.45.1 pyhd8ed1ab_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xxhash 3.5.0 pypi_0 pypi
yarl 1.18.3 pypi_0 pypi
### Versions
torch 2.5.1+cu121, torchvision 0.20.1+cu121, Python 3.10.16 (Windows, conda env `llama_factory`); full package list above.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,889,439,101
|
[fx] reimplement `fx.map_aggregate` with pytree
|
XuehaiPan
|
closed
|
[
"open source",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148282
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,889,409,121
|
[torch] Fix unsafe concurrent access to autocast_enabled
|
t-ivan-gr
|
closed
|
[
"oncall: jit",
"fb-exported",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 16
|
CONTRIBUTOR
|
Summary: Make `autocast_enabled` atomic, as it can be accessed from multiple threads.
Differential Revision: D70456813
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mcarilli @ptrblck @leslie-fang-intel
| true
|
2,889,384,326
|
nn.Matmul return different ret within Parameter and Tensor
|
zhaozheng09
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
```
import ast
import time
import torch
import re
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import os
os.environ['NVIDIA_TF32_OVERRIDE'] = '0'
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
os.environ['NVIDIA_TF32_OVERRIDE'] = '0'
os.environ['TF_DETERMINISTIC_OPS'] = '1'
os.environ['TF_CUDNN_DETERMINISTIC'] = '1'
class CustomLinear:
def __init__(self) -> None:
self.weight = nn.Parameter(torch.empty((3,3))).to('cuda')
def forward(self, x):
return torch.matmul(x, self.weight);
def Diff(val1, val2):
diff = np.abs(val1.numpy() - val2.numpy())
max_diff = np.max(diff)
sum_diff = np.sum(diff)
print(f"Max difference: {max_diff}")
def matmul_test():
a = torch.tensor([[ 0.0016, 0.0181, 0.0100],
[ 0.0019, 0.0125, 0.0250],
[-0.0221, 0.0075, 0.0211]], dtype=torch.float32).to('cuda')
b = torch.tensor([[ 0.0364, -0.0028, 0.0064],
[ 0.0311, -0.0305, -0.0345],
[-0.0482, 0.0069, -0.0003]], dtype=torch.float32).to('cuda')
with torch.no_grad():
# init a nn.Parameter.
weight = nn.Parameter(torch.empty((3,3))).to('cuda')
weight.copy_(b)
# init a nn.Parameter in class .
mlp = CustomLinear()
mlp.weight.copy_(b.T)
# check input is same .
Diff(b.cpu(), mlp.weight.T.cpu())
matmul_param = torch.matmul(a, weight.T)
matmul_tensor = torch.matmul(a, b.T)
matmul_mlp = mlp.forward(a)
print('\n====Diff Parameter and Tensor')
Diff(matmul_param.cpu(), matmul_tensor.cpu())
print('\n====Diff Parameter in class and Tensor')
Diff(matmul_mlp.cpu(), matmul_param.cpu())
matmul_test()
```
diff:
```
Max difference: 0.0
====Diff Parameter and Tensor
Max difference: 0.0
====Diff Parameter in class and Tensor
Max difference: 7.275957614183426e-12
```
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.17
Python version: 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-147.mt20200626.413.el8_1.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA Graphics Device
GPU 1: NVIDIA Graphics Device
GPU 2: NVIDIA Graphics Device
GPU 3: NVIDIA Graphics Device
GPU 4: NVIDIA Graphics Device
GPU 5: NVIDIA Graphics Device
GPU 6: NVIDIA Graphics Device
GPU 7: NVIDIA Graphics Device
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.4
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7713 64-Core Processor
Stepping: 1
CPU MHz: 3092.771
CPU max MHz: 2000.0000
CPU min MHz: 1500.0000
BogoMIPS: 3992.47
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.1
[pip3] torchdata==0.9.0
[pip3] torchmetrics==1.6.0
[pip3] torchrec==1.0.0
[pip3] triton==3.0.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] torchdata 0.9.0 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchrec 1.0.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
| true
|
2,889,297,443
|
[AOTI] Fix aot_inductor_package test errors
|
desertfire
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary: Fix fbcode test failures introduced by https://github.com/pytorch/pytorch/pull/147975. Make sure script.ld is copied to the build-time directory.
Differential Revision: D70454149
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,889,263,474
|
INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/NamedTensorUtils.cpp":163, please report a bug to PyTorch
|
cybersupersoap
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
An `INTERNAL ASSERT FAILED` error is raised when calling `torch.tensor` with an empty `names` list:
```python
import torch
tensor_names = []
x = torch.tensor([[1, 2, 3, 4], [4, 3, 2, 1]], dtype=torch.float32, names=tensor_names)
```
Error messages:
```
RuntimeError: !names.empty() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/NamedTensorUtils.cpp":163, please report a bug to PyTorch. propagate_names: passed in empty names to propagate to result with shape [2, 4]. Empty names means that name inference didnot occur; use `propagate_names_if_nonempty` instead of `propagate_names`.
```
The error is reproducible with the nightly build `2.7.0.dev20250208+cpu`.
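For comparison, a minimal sketch of calls that are expected to succeed, assuming named-tensor semantics where `names`, when supplied, must name every dimension (the dimension names below are illustrative):
```python
import torch

data = [[1, 2, 3, 4], [4, 3, 2, 1]]

# no names at all - works
x = torch.tensor(data, dtype=torch.float32)

# one name per dimension - also works under named-tensor semantics
y = torch.tensor(data, dtype=torch.float32, names=("rows", "cols"))

# names=[] (an empty list) is the case that trips the internal assert above
```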
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @wenzhe-nrv
| true
|
2,889,263,463
|
[MPS] Speedup interpolation
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148277
First of all, the perf claims made in https://github.com/pytorch/pytorch/pull/145581 and https://github.com/pytorch/pytorch/pull/148154 are too good to be true (due to a bug in the benchmark script that did not call `torch.mps.synchronize` at the end), but the results are still slightly better than the MPSGraph path, probably due to lower launch overhead.
And while measuring performance correctly, I've noticed that a lot of time is spent on 64-bit integral division of thread_index to get spatial coordinates. Simply downcasting the divisor to a 32-bit integer (the thread index itself is 32-bit) speeds it up almost 2x for bilinear and bicubic, as can be demonstrated by running the following script (a small sketch of the index decomposition itself follows the before/after numbers below):
```python
import torch
import time
import subprocess
import itertools
def benchmark(device, dtype, mode="bilinear", antialias=False, sf=.5):
# Create example inputs
x = torch.testing.make_tensor(1, 1, 2048, 2048, device=device, dtype=dtype)
# define kwargs
kwargs = {"antialias": antialias, "mode": mode, "scale_factor": sf}
# Skip for unimplemented flavors
if antialias and mode == "bicubic" and device == "mps":
return None, "Skip"
elif antialias and dtype != torch.float32:
if device == "cpu":
return None, "Skip"
outputs_match = None
else:
# Check output
y = torch.nn.functional.interpolate(x, **kwargs)
z = torch.nn.functional.interpolate(x.cpu(), **kwargs)
outputs_match = torch.allclose(y.cpu(), z)
if not outputs_match:
atol = (y.cpu() - z).abs().max()
rtol = ((y.cpu() - z)[z!=0]/z[z!=0]).abs().max()
print(f"atol={atol} rtol={rtol}")
# Measure time manually
start_time = time.time() * 1000
for _ in range(1000):
y = torch.nn.functional.interpolate(x, **kwargs)
torch.mps.synchronize()
end_time = time.time() * 1000
manual_delta = (end_time - start_time)
average_time = f"{manual_delta:6.1f}"
return "True " if outputs_match else "False", average_time
brand_string = subprocess.check_output(['sysctl', '-n', 'machdep.cpu.brand_string']).decode("utf-8").strip()
for mode,antialias in itertools.product(["bilinear", "bicubic"], [False, True]):
outputs_match_list = []
average_time_list = []
for device in ["mps", "cpu"]:
for dtype in [torch.float32, torch.float16, torch.bfloat16]:
outputs_match, average_time = benchmark(device, dtype, mode=mode, antialias=antialias)
outputs_match_list.append(str(outputs_match))
average_time_list.append(average_time)
print(f"\nBenchmarking Results (collected on {brand_string}) for {mode} interpolation {'with antialias' if antialias else ''}:")
print("-"*40)
print("Device : MPS | CPU")
print("Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16")
print(f"Outputs Match : ", " | ".join(outputs_match_list))
print(f"Average Time (us) :", " |".join(average_time_list))
```
Before
```
Benchmarking Results (collected on Apple M4 Pro) for bilinear interpolation :
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : True | True | True | True | True | True
Average Time (us) : 292.0 | 264.7 | 267.9 | 289.1 | 230.9 | 309.1
atol=1.430511474609375e-06 rtol=0.11363636702299118
Benchmarking Results (collected on Apple M4 Pro) for bilinear interpolation with antialias:
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : False | False | False | True | None | None
Average Time (us) : 698.3 | 684.2 | 683.8 | 851.0 |Skip |Skip
atol=2.086162567138672e-06 rtol=0.019750799983739853
Benchmarking Results (collected on Apple M4 Pro) for bicubic interpolation :
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : False | True | True | True | True | True
Average Time (us) : 314.3 | 301.0 | 298.8 | 681.5 | 616.7 | 833.7
```
After
```
Benchmarking Results (collected on Apple M4 Pro) for bilinear interpolation :
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : True | True | True | True | True | True
Average Time (us) : 119.9 | 98.9 | 98.6 | 289.8 | 231.9 | 308.5
atol=1.430511474609375e-06 rtol=0.05681818351149559
Benchmarking Results (collected on Apple M4 Pro) for bilinear interpolation with antialias:
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : False | False | False | True | None | None
Average Time (us) : 541.9 | 531.1 | 531.0 | 846.8 |Skip |Skip
atol=2.0265579223632812e-06 rtol=0.008604463189840317
Benchmarking Results (collected on Apple M4 Pro) for bicubic interpolation :
----------------------------------------
Device : MPS | CPU
Dtype : FP32 | FP16 | BF16 | FP32 | FP16 | BF16
Outputs Match : False | True | True | True | True | True
Average Time (us) : 314.3 | 301.0 | 298.8 | 681.5 | 616.7 | 833.7
```
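For intuition, a plain-Python sketch of the index decomposition the kernel performs for every output element (variable and parameter names are illustrative); on the GPU this division is 64-bit unless the divisor is explicitly downcast to a 32-bit integer:
```python
# Split a linear thread index into spatial coordinates via integer div/mod.
def decompose(thread_index: int, out_w: int, out_h: int):
    x = thread_index % out_w
    y = (thread_index // out_w) % out_h
    nc = thread_index // (out_w * out_h)  # combined batch/channel index
    return nc, y, x

print(decompose(thread_index=5000, out_w=1024, out_h=1024))  # (0, 4, 904)
```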
TODO:
- Figure out if this ops make more sense as 3D jobs with n and c channels dispatch as one more dimension
| true
|
2,889,261,995
|
`torch.sparse.sum` can cause a `Segmentation fault (core dumped)`
|
cybersupersoap
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
A `Segmentation fault` is raised when calling `torch.sparse.sum` on a sparse COO tensor whose indices contain an out-of-range entry:
```python
import torch
input = torch.sparse_coo_tensor(torch.tensor([[0, 1, -1], [2, 0, 2]]), torch.tensor([1, 2, 3]), torch.Size([3, 3]))
torch.sparse.sum(input, dim=-1)
```
Error messages:
```
Segmentation fault (core dumped)
```
The error is reproducible with the nightly build `2.7.0.dev20250208+cpu`.
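As a sketch of how the bad index (`-1` is out of range for a dimension of size 3) can be surfaced up front instead of crashing later, assuming the `check_invariants` flag of `torch.sparse_coo_tensor`:
```python
import torch

indices = torch.tensor([[0, 1, -1], [2, 0, 2]])  # -1 is out of range for dim 0
values = torch.tensor([1, 2, 3])

try:
    s = torch.sparse_coo_tensor(
        indices, values, torch.Size([3, 3]), check_invariants=True
    )
except RuntimeError as e:
    print(e)  # a readable complaint about the out-of-range index
```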
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @wenzhe-nrv
### Versions
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250301+cu118
[pip3] torchaudio==2.6.0.dev20250301+cu118
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250301+cu118
[pip3] triton==3.2.0
[conda] No relevant packages
| true
|
2,889,258,121
|
`torch.nn.LazyConvTranspose1d` can cause a `Floating point exception (core dumped)`
|
cybersupersoap
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
A `Floating point exception` is raised when `torch.nn.LazyConvTranspose1d` is constructed with an out-of-range stride:
```python
try:
import torch
import numpy as np
input_data = torch.randn(3, 5, 7)
conv1d_transpose = torch.nn.LazyConvTranspose1d(3, 2, stride=2**31, padding=1) # Setting stride to an out-of-bounds value
output_data = conv1d_transpose(input_data)
print('success execution')
except Exception as e:
print(e)
print('failed execution')
```
Error messages:
```
Floating point exception (core dumped)
```
The error is reproducible with the nightly build `2.7.0.dev20250301`.
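For contrast, a sketch with an in-range stride (parameter values are arbitrary), which initializes and runs as expected:
```python
import torch

x = torch.randn(3, 5, 7)
# out_channels=2, kernel_size=3; the stride fits comfortably in a 32-bit int
conv = torch.nn.LazyConvTranspose1d(2, kernel_size=3, stride=2, padding=1)
y = conv(x)
print(y.shape)  # torch.Size([3, 2, 13])
```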
### Versions
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250208+cu124
[pip3] torchaudio==2.6.0.dev20250208+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.22.0.dev20250208+cu124
[pip3] triton==3.0.0
[conda] No relevant packages
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @EikanWang @wenzhe-nrv
| true
|
2,889,208,308
|
Updates to build rowwise scaled mm kernel on SM10.0a
|
danielvegamyhre
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"topic: build",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
## Summary
Update cmake files and RowwiseScaledMM.cu to build on SM10.0a arch.
**NOTE**: performance optimization will be done in separate follow up PRs
## Steps to verify build
1. Access devgpu/machine with B200 GPUs, verify B200s are visible w/ `nvidia-smi`
2. Install CUDA toolkit 12.8
- e.g. see [Nvidia docs](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Rocky&target_version=9&target_type=rpm_local)
3. Verify CUDA toolkit installation
- e.g. `nvcc --version` should have `... Cuda compilation tools, release 12.8 ... ` in output
4. Set env var `TORCH_CUDA_ARCH_LIST=10.0a`
5. Build pytorch from source with this PR ([steps](https://github.com/pytorch/pytorch#from-source))
6. Uninstall `pytorch-triton` with `pip uninstall pytorch-triton`
7. Build and install triton from source: https://github.com/triton-lang/triton?tab=readme-ov-file#install-from-source
8. Run tests shown in test plan below
**NOTE**: performance optimization will be done in a separate PR. The goal of this PR is just to ensure it builds correctly.
## Test plan
- `python test/distributed/tensor/test_matrix_ops.py -k scaled_mm`: OK
- `python test/test_matmul_cuda.py -k rowwise`: OK
- `python test/test_flop_counter.py -k scaled_mm`: OK
- `python test/inductor/test_aot_inductor.py -k fp8`: OK
- `python test/inductor/test_fp8.py`: OK
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,889,173,850
|
SIGSEGV due to insufficient return value checking for PyFrame_GetLocals
|
thomasdullien
|
open
|
[
"needs reproduction",
"module: crash",
"triaged",
"module: python frontend"
] | 4
|
NONE
|
### 🐛 Describe the bug
I'm getting a SIGSEGV when running some Torch code locally. It appears to be a null pointer dereference caused by insufficient return value checking of PyFrame_GetLocals (which, starting from more recent Python versions, can in theory return NULL -- but all the code calling it blindly assumes it'll return a valid pointer, and happily dereferences it).
Below is the GDB trace:
```
Starting program: /home/thomasdullien/python-env/pytorch/bin/python3 ./experiments2.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffd45ff6c0 (LWP 23487)]
[New Thread 0x7fffd3dfe6c0 (LWP 23488)]
[New Thread 0x7fffd35fd6c0 (LWP 23489)]
[New Thread 0x7fffd2dfc6c0 (LWP 23490)]
[New Thread 0x7fffd25fb6c0 (LWP 23491)]
[New Thread 0x7fffd1dfa6c0 (LWP 23492)]
[New Thread 0x7fffd15f96c0 (LWP 23493)]
[New Thread 0x7fffd0df86c0 (LWP 23494)]
[New Thread 0x7fffd05f76c0 (LWP 23495)]
[New Thread 0x7fffcfdf66c0 (LWP 23496)]
[New Thread 0x7fffcf5f56c0 (LWP 23497)]
[New Thread 0x7fffcedf46c0 (LWP 23498)]
[New Thread 0x7fffce5f36c0 (LWP 23499)]
[New Thread 0x7fffcddf26c0 (LWP 23500)]
[New Thread 0x7fffcd5f16c0 (LWP 23501)]
[New Thread 0x7ffef8d5e6c0 (LWP 23504)]
Quadro P2200
[New Thread 0x7ffef2fff6c0 (LWP 23505)]
[New Thread 0x7ffef27fe6c0 (LWP 23506)]
Epoch 1/5000
[New Thread 0x7ffeddbff6c0 (LWP 23508)]
[New Thread 0x7ffed9fff6c0 (LWP 23509)]
[New Thread 0x7ffed97fe6c0 (LWP 23510)]
Thread 1 "python3" received signal SIGSEGV, Segmentation fault.
0x00007fffc42bf856 in torch::profiler::impl::(anonymous namespace)::PythonTracer::recordPyCall(torch::profiler::impl::(anonymous namespace)::ThreadLocalResults&, _frame*, bool) () from /home/thomasdullien/python-env/pytorch/lib/python3.13/site-packages/torch/lib/libtorch_python.so
(gdb) x/20i $rip-0x20
0x7fffc42bf836 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1206>: add %al,(%rax)
0x7fffc42bf838 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1208>: call 0x7fffc3ca30b0 <PyFrame_GetLocals@plt>
0x7fffc42bf83d <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1213>: mov %rax,%rdi
0x7fffc42bf840 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1216>: lea 0x68005e(%rip),%rsi # 0x7fffc493f8a5
0x7fffc42bf847 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1223>: mov %rax,0x30(%rsp)
0x7fffc42bf84c <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1228>: call 0x7fffc3cb0cf0 <PyDict_GetItemString@plt>
0x7fffc42bf851 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1233>: mov %rax,0x38(%rsp)
=> 0x7fffc42bf856 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1238>: mov (%rax),%edx
0x7fffc42bf858 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1240>: add $0x1,%edx
0x7fffc42bf85b <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1243>: je 0x7fffc42bf85f <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1247>
0x7fffc42bf85d <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1245>: mov %edx,(%rax)
0x7fffc42bf85f <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1247>: mov %r13,%rdi
0x7fffc42bf862 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1250>: call 0x7fffc3cb1840 <PyFrame_GetBack@plt>
0x7fffc42bf867 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1255>: mov %rax,0x60(%rsp)
0x7fffc42bf86c <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1260>: mov %rax,%rsi
0x7fffc42bf86f <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1263>: test %rax,%rax
0x7fffc42bf872 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1266>: je 0x7fffc42bfbd0 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+2128>
0x7fffc42bf878 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1272>: mov 0x38(%rsp),%rax
0x7fffc42bf87d <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1277>: lea 0x98(%rsp),%rdi
0x7fffc42bf885 <_ZN5torch8profiler4impl12_GLOBAL__N_112PythonTracer12recordPyCallERNS2_18ThreadLocalResultsEP6_frameb+1285>: mov %rax,0x90(%rsp)
```
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux trixie/sid (x86_64)
GCC version: (Debian 14.2.0-16) 14.2.0
Clang version: 19.1.7 (1+b1)
CMake version: version 3.31.5
Libc version: glibc-2.40
Python version: 3.13.2 (main, Feb 5 2025, 01:23:35) [GCC 14.2.0] (64-bit runtime)
Python platform: Linux-6.12.12-amd64-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro P2200
Nvidia driver version: 535.216.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2286M CPU @ 2.40GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 13
CPU(s) scaling MHz: 58%
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi sgx_lc md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD
| true
|
2,889,166,293
|
[MPS] metal unary kernel for sqrt
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: performance",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 10
|
COLLABORATOR
|
Issue #148219 highlighted the high dispatch times of ops that run through MPSGraph on smaller tensors. This PR rewrites `sqrt` as a Metal kernel to mitigate that issue.
## Speedups:
Matrix size means NxN matrix here.

Code to generate the timings (requires building torch both before and after this change):
```python
import torch
import numpy as np
import time
import csv
matrix_sizes = [1, 100, 1000, 10_000]
num_runs = 1000
warmup_runs = 3
def run_sqrt(A):
torch.mps.synchronize()
start = time.perf_counter()
c = torch.sqrt(A)
torch.mps.synchronize()
end = time.perf_counter()
return c, end - start
results = {
'N': [],
'mean_time': [],
'std_time': []
}
for n in matrix_sizes:
print(f"\nBenchmarking N={n}")
try:
A_mps = torch.rand((n, n), dtype=torch.float32, device="mps")
for _ in range(warmup_runs):
_, _ = run_sqrt(A_mps)
times = []
for _ in range(num_runs):
_, t = run_sqrt(A_mps)
times.append(t)
mean_time = np.mean(times)
std_time = np.std(times)
results['N'].append(n)
results['mean_time'].append(mean_time)
results['std_time'].append(std_time)
print(f"Mean time: {mean_time:.4f}s ± {std_time:.4f}s")
except RuntimeError as e:
print(f"Error for N={n}: {e}")
continue
with open('sqrt_benchmark_times_new.csv', 'w', newline='') as f:
writer = csv.writer(f)
writer.writerow(['N', 'mean_time', 'std_time'])
for i in range(len(results['N'])):
writer.writerow([
results['N'][i],
results['mean_time'][i],
results['std_time'][i]
])
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,889,162,744
|
Fix bug when Inductor include path contains spaces
|
vladkvit
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 14
|
CONTRIBUTOR
|
This PR fixes a bug with how include directories with spaces are handled on Windows. I ran into an edge case with torch.compile(): it errors out with an exception on Windows. In particular, it tries to execute the following: `cl /I C:/Program Files/Python311/Include ...`, where `C:/Program` is treated as a separate argument from `Files/Python311/Include`.
I looked into using something like `shlex.quote` or `pathlib.Path`, but I didn't find those options to be suitable (shlex is POSIX shell only, pathlib.Path does not escape spaces).
There is another place in the function that also deals with escaping spaces. My fix follows the same style. https://github.com/pytorch/pytorch/blob/0ff2e6a85a3264438aaec8bfb9c69b679ea835da/torch/_inductor/cpp_builder.py#L1464
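A minimal sketch of the kind of quoting the fix applies; `wrap_path` and the sample include directories are illustrative, not the actual `cpp_builder` helpers:
```python
def wrap_path(path: str) -> str:
    """Quote a path so the compiler command line treats it as a single argument."""
    return f'"{path}"' if " " in path and not path.startswith('"') else path

include_dirs = ["C:/Program Files/Python311/Include", "C:/tools/include"]
args = " ".join(f"/I {wrap_path(d)}" for d in include_dirs)
print(f"cl {args} ...")
# cl /I "C:/Program Files/Python311/Include" /I C:/tools/include ...
```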
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,889,141,383
|
[Inductor] Hot fix after #148011
|
anmyachev
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 5
|
COLLABORATOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @davidberard98
| true
|
2,889,097,171
|
Should DTensor support `Shard()` placement without dim requirement?
|
kwen2501
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 21
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
`ShardStorage` is the name for a placement that shards a tensor element-wise, based on the elements' storage order, without specifying which dimension to shard the tensor on.
(Edit: I realized that `Shard()` without dim specification can be used to denote the same meaning.)
It matches cases like FSDP or a distributed optimizer, which are insensitive to the sharding dimension and only mildly sensitive to element contiguity (for performance reasons).
It also solves cases where a "human preferred" dimension (0 for example) does not provide enough length for sharding, which limits the general applicability of distributed libraries.
If added, we should likely restrict `ShardStorage` to support only redistribute or element-wise operations.
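A sketch of what this looks like from the user side, assuming the public `torch.distributed.tensor` API (PyTorch 2.4+); the dim-less `Shard()` line is the proposal, not existing behavior:
```python
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

# single-process gloo group so the sketch runs standalone
dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500", rank=0, world_size=1)
mesh = init_device_mesh("cpu", (1,))

w = torch.randn(4, 6)
dt = distribute_tensor(w, mesh, [Shard(0)])   # today: the caller must pick a dim

# proposed: distribute_tensor(w, mesh, [Shard()])  # shard by storage order, no dim

dist.destroy_process_group()
```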
### Alternatives
Alt 1: FSDP asks users for a sharding dimension.
This might impact FSDP's UI, and users probably don't know which dim to use either.
Alt 2: FSDP uses a heuristic to figure out a sharding dimension.
It may or may not work (because a tensor may not have a single dimension that's greater than the world size).
It also requires every library in similar situation to build its own heuristic.
Moreover, arbitrary dim support may make the `allgather_copy_in` kernel more difficult to write, especially when considering third-party hardware that wants to support FSDP.
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,889,093,221
|
ONNX Export Produces main_graph Instead of torch_jit and Fails on aten::format in PyTorch 2.x
|
antoninononooono
|
closed
|
[
"module: onnx",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
Description:
I am trying to export a Transformer-based speech model to ONNX using PyTorch 2.x. However, I encountered two major issues:
1. Exporting results in main_graph instead of a proper JIT script model, making the exported model unusable.
2. The aten::format operator is not supported in ONNX opset version 17, causing the export to fail entirely.
The only way I managed to export the model correctly was by downgrading to PyTorch 1.13.0, but I fear that this will become an impossible workaround as dependencies evolve.
I would like to know:
- How can I ensure that ONNX export produces a proper JIT model instead of main_graph?
- Is there an alternative way to handle aten::format without downgrading to PyTorch 1.13.0?
- Is there a roadmap for supporting aten::format in ONNX for PyTorch 2.x?
**Environment:**
- OS: Windows 11
- PyTorch Versions Tested: 2.x (failing), 1.13.0 (working)
- ONNX Opset Version: 17
- Export Method: torch.onnx.export()
- Backend: CPU
**Steps to Reproduce:**
Issue 1: ONNX Export Produces main_graph Instead of JIT
Load the model checkpoint (.pt) and convert it to TorchScript:
```python
scripted_model = torch.jit.script(model)
```
Export to ONNX:
```python
torch.onnx.export(
    scripted_model,
    input_dummy,
    "output_model.onnx",
    export_params=True,
    opset_version=17,
    do_constant_folding=True,
    input_names=['input_tensor'],
    output_names=['output'],
    dynamic_axes={'input_tensor': {1: 'input_length'}, 'output': {0: 'output_length'}}
)
```
The exported model contains main_graph instead of the expected TorchScript structure, making it unusable.
Issue 2: aten::format Not Supported
The following error occurs during export:
**Error exporting model to ONNX:**
Exporting the operator 'aten::format' to ONNX opset version 17 is not supported.
Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues
Downgrading to PyTorch 1.13.0 fixes this issue, but this is not a sustainable solution.
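One partial mitigation while staying on 2.x, under the assumption that `aten::format` comes from f-strings / `str.format` inside the scripted code (not a confirmed fix for this particular model), is to locate and remove those formatting calls before export; a minimal sketch:
```python
import torch

class WithFormat(torch.nn.Module):
    def forward(self, x):
        # string formatting in scripted code lowers to aten::format
        assert x.dim() == 2, f"expected 2D input, got {x.dim()}D"
        return x * 2

class WithoutFormat(torch.nn.Module):
    def forward(self, x):
        # plain message, no formatting -> no aten::format in the graph
        assert x.dim() == 2, "expected 2D input"
        return x * 2

print("aten::format" in str(torch.jit.script(WithFormat()).graph))    # True -> export fails
print("aten::format" in str(torch.jit.script(WithoutFormat()).graph)) # False
```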
**Model Details:**
- Transformer-based model for speech processing
- Uses torch.nn.TransformerEncoder
- Positional encoding applied
- Custom inference logic
**Expected Behavior:**
ONNX export should produce a JIT-compatible model instead of main_graph.
If aten::format is unsupported, there should be a recommended workaround.
**Additional Notes:**
Using torch.jit.trace() does not resolve the issue.
The error persists across different ONNX opsets.
Some workarounds suggest using torch.onnx.dynamo_export(), but this does not work in PyTorch 2.x either.
Looking forward to any guidance on resolving these issues. Thanks!
### Versions
Failed Version: PyTorch 2.x (e.g., 2.1.0+cu118)
Working Version: PyTorch 1.13.0+cu117
| true
|
2,889,093,188
|
Build rowwise scaled mm CUDA kernel on SM10.0a (B200)
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"topic: not user facing",
"topic: build"
] | 1
|
CONTRIBUTOR
|
WIP - a bunch of formatting changes are getting included in this PR automatically, need to exclude those changes somehow.
## Summary
Update cmake files and RowwiseScaledMM.cu to build on SM10.0a arch.
## Steps to verify build
1. Access devgpu/machine with B200 GPUs, verify B200s are visible w/ `nvidia-smi`
2. Install CUDA toolkit 12.8
- e.g. see [Nvidia docs](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Rocky&target_version=9&target_type=rpm_local)
3. Verify CUDA toolkit installation
- e.g. `nvcc --version` should have `... Cuda compilation tools, release 12.8 ... ` in output
4. Set env var `TORCH_CUDA_ARCH_LIST=10.0a`
5. Build pytorch from source with this PR ([steps](https://github.com/pytorch/pytorch#from-source))
6. Run tests shown in test plan below
**NOTE**: performance optimization will be done in a separate PR. The goal of this PR is just to ensure it builds correctly.
## Test plan
- `python test/distributed/tensor/test_matrix_ops.py -k scaled_mm`: OK
- `python test/test_matmul_cuda.py -k rowwise`: OK
- `python test/test_flop_counter.py -k scaled_mm`: OK
**Note** inductor tests are failing, and it seems to be due to the fact that the triton version packaged with pytorch does not support the latest compute capability / arch: https://github.com/triton-lang/triton/issues/5737. However, it looks like we have a fix in pytorch planned for the next patch release: https://github.com/triton-lang/triton/pull/5765.
- `python test/inductor/test_fp8.py`: FAILING
- ```python: ../../../lib/Dialect/TritonGPU/Transforms/AccelerateMatmul.cpp:36: int mlir::triton::gpu::(anonymous namespace)::getMMAVersionSafe(int, DotOp): Assertion `false && "computeCapability not supported"' failed.
Aborted (core dumped)```
- `python test/inductor/test_aot_inductor.py -k fp8`: FAILING
- ```RuntimeError: Internal Triton PTX codegen error `ptxas` stderr: ptxas fatal : Value 'sm_100' is not defined for option 'gpu-name'```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,889,086,715
|
Fix dist.init_process_group on windows
|
H-Huang
|
closed
|
[
"oncall: distributed",
"module: windows",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148266
Fix https://github.com/pytorch/pytorch/issues/139990
We don't build libuv on windows so anything that creates `TCPStore` which includes `init_process_group()` will fail, which is a bad experience. We should just default to `USE_LIBUV=0` for windows. There were a decent amount of hits for this [error on google ](https://www.google.com/search?q=use_libuv+was+requested+but+PyTorch+was+build+without+libuv+support&sca_esv=921f59ac5f8bd98a&sxsrf=AHTn8zpG3PxdKoomFHkclOc451rBhoc3jw%3A1740854890873&source=hp&ei=albDZ5GHM-uIptQP4NTikQw&iflsig=ACkRmUkAAAAAZ8Nkei9H-aB2IBCk3pUOK3yFl5xBLZUt&ved=0ahUKEwiR5P7qxemLAxVrhIkEHWCqOMIQ4dUDCBg&uact=5&oq=use_libuv+was+requested+but+PyTorch+was+build+without+libuv+support&gs_lp=Egdnd3Mtd2l6IkN1c2VfbGlidXYgd2FzIHJlcXVlc3RlZCBidXQgUHlUb3JjaCB3YXMgYnVpbGQgd2l0aG91dCBsaWJ1diBzdXBwb3J0SABQAFgAcAB4AJABAJgBAKABAKoBALgBA8gBAPgBAvgBAZgCAKACAJgDAJIHAKAHAA&sclient=gws-wiz) and https://github.com/pytorch/pytorch/issues/139579, so I figured we should add a more helpful message as well.
We don't have CI for Windows and our support is just best effort, so I just tested these changes on my Windows machine.
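Until the default changes, a user-side sketch of the workaround (assuming the TCP rendezvous still consults the `USE_LIBUV` environment variable, as referenced above):
```python
import os
import torch.distributed as dist

# opt out of the libuv TCPStore backend before the store is created
os.environ["USE_LIBUV"] = "0"

dist.init_process_group(
    backend="gloo",                        # gloo is the backend available on Windows
    init_method="tcp://127.0.0.1:29500",
    rank=0,
    world_size=1,
)
dist.destroy_process_group()
```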
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex, since I think we would run into the same issue for torchelastic when it creates the store
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,889,055,307
|
Fix macro for bit_cast in c10/util/bit_cast.h - one line change
|
wschin
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Fixes #148263.
| true
|
2,889,052,828
|
[BE][Ez]: Update fmt submodule to 11.1.4
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
This minor release is mostly bugfixes, ABI fixes, and compiler support fixes.
| true
|
2,889,046,820
|
Wrong macro used when building c10/util/bit_cast.h with std::bit_cast
|
wschin
|
closed
|
[
"module: build",
"triaged",
"bug"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
When building PyTorch with `clang++-17 -std=gnu++20 -x c++`, the macro https://github.com/pytorch/pytorch/blob/d23051f29ba01d0b5a1da03ed1f023bfe643b640/c10/util/bit_cast.h#L6 decides to use `bit_cast` from the standard C++ library. However, the standard library available to that `clang++-17` does NOT implement it, so a link error happens. To fix this macro, I propose changing the condition
```cpp
#if __has_include(<bit>) && (__cplusplus >= 202002L || (defined(__cpp_lib_bit_cast) && __cpp_lib_bit_cast >= 201806L))
```
to
```cpp
#if __has_include(<bit>) && defined(__cpp_lib_bit_cast) && __cpp_lib_bit_cast >= 201806L
```
FYI: clang++-17 version
Ubuntu clang version 17.0.6 (++20231208085846+6009708b4367-1~exp1~20231208085949.74)
FYI: content of the standard `<bit>` header. There is NO `bit_cast`.
```
// <bit> -*- C++ -*-
// Copyright (C) 2018-2020 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 3, or (at your option)
// any later version.
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// Under Section 7 of GPL version 3, you are granted additional
// permissions described in the GCC Runtime Library Exception, version
// 3.1, as published by the Free Software Foundation.
// You should have received a copy of the GNU General Public License and
// a copy of the GCC Runtime Library Exception along with this program;
// see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
// <http://www.gnu.org/licenses/>.
/** @file include/bit
* This is a Standard C++ Library header.
*/
#ifndef _GLIBCXX_BIT
#define _GLIBCXX_BIT 1
#pragma GCC system_header
#if __cplusplus >= 201402L
#include <type_traits>
#if _GLIBCXX_HOSTED
# include <ext/numeric_traits.h>
#else
# include <limits>
/// @cond undocumented
namespace __gnu_cxx
{
template<typename _Tp>
struct __int_traits
{
static constexpr int __digits = std::numeric_limits<_Tp>::digits;
static constexpr _Tp __max = std::numeric_limits<_Tp>::max();
};
}
/// @endcond
#endif
namespace std _GLIBCXX_VISIBILITY(default)
{
_GLIBCXX_BEGIN_NAMESPACE_VERSION
/**
* @defgroup bit_manip Bit manipulation
* @ingroup numerics
*
* Utilities for examining and manipulating individual bits.
*
* @{
*/
/// @cond undoc
template<typename _Tp>
constexpr _Tp
__rotl(_Tp __x, int __s) noexcept
{
constexpr auto _Nd = __gnu_cxx::__int_traits<_Tp>::__digits;
const int __r = __s % _Nd;
if (__r == 0)
return __x;
else if (__r > 0)
return (__x << __r) | (__x >> ((_Nd - __r) % _Nd));
else
return (__x >> -__r) | (__x << ((_Nd + __r) % _Nd)); // rotr(x, -r)
}
template<typename _Tp>
constexpr _Tp
__rotr(_Tp __x, int __s) noexcept
{
constexpr auto _Nd = __gnu_cxx::__int_traits<_Tp>::__digits;
const int __r = __s % _Nd;
if (__r == 0)
return __x;
else if (__r > 0)
return (__x >> __r) | (__x << ((_Nd - __r) % _Nd));
else
return (__x << -__r) | (__x >> ((_Nd + __r) % _Nd)); // rotl(x, -r)
}
template<typename _Tp>
constexpr int
__countl_zero(_Tp __x) noexcept
{
using __gnu_cxx::__int_traits;
constexpr auto _Nd = __int_traits<_Tp>::__digits;
if (__x == 0)
return _Nd;
constexpr auto _Nd_ull = __int_traits<unsigned long long>::__digits;
constexpr auto _Nd_ul = __int_traits<unsigned long>::__digits;
constexpr auto _Nd_u = __int_traits<unsigned>::__digits;
if _GLIBCXX17_CONSTEXPR (_Nd <= _Nd_u)
{
constexpr int __diff = _Nd_u - _Nd;
return __builtin_clz(__x) - __diff;
}
else if _GLIBCXX17_CONSTEXPR (_Nd <= _Nd_ul)
{
constexpr int __diff = _Nd_ul - _Nd;
return __builtin_clzl(__x) - __diff;
}
else if _GLIBCXX17_CONSTEXPR (_Nd <= _Nd_ull)
{
constexpr int __diff = _Nd_ull - _Nd;
return __builtin_clzll(__x) - __diff;
}
else // (_Nd > _Nd_ull)
{
static_assert(_Nd <= (2 * _Nd_ull),
"Maximum supported integer size is 128-bit");
unsigned long long __high = __x >> _Nd_ull;
if (__high != 0)
{
constexpr int __diff = (2 * _Nd_ull) - _Nd;
return __builtin_clzll(__high) - __diff;
}
constexpr auto __max_ull = __int_traits<unsigned long long>::__max;
unsigned long long __low = __x & __max_ull;
return (_Nd - _Nd_ull) + __builtin_clzll(__low);
}
}
template<typename _Tp>
constexpr int
__countl_one(_Tp __x) noexcept
{
return std::__countl_zero<_Tp>((_Tp)~__x);
}
template<typename _Tp>
constexpr int
__countr_zero(_Tp __x) noexcept
{
using __gnu_cxx::__int_traits;
constexpr auto _Nd = __int_traits<_Tp>::__digits;
if (__x == 0)
return _Nd;
constexpr auto _Nd_ull = __int_traits<unsigned long long>::__digits;
constexpr auto _Nd_ul = __int_traits<unsigned long>::__digits;
constexpr auto _Nd_u = __int_traits<unsigned>::__digits;
if _GLIBCXX17_CONSTEXPR (_Nd <= _Nd_u)
return __builtin_ctz(__x);
else if _GLIBCXX17_CONSTEXPR (_Nd <= _Nd_ul)
return __builtin_ctzl(__x);
else if _GLIBCXX17_CONSTEXPR (_Nd <= _Nd_ull)
return __builtin_ctzll(__x);
else // (_Nd > _Nd_ull)
{
static_assert(_Nd <= (2 * _Nd_ull),
"Maximum supported integer size is 128-bit");
constexpr auto __max_ull = __int_traits<unsigned long long>::__max;
unsigned long long __low = __x & __max_ull;
if (__low != 0)
return __builtin_ctzll(__low);
unsigned long long __high = __x >> _Nd_ull;
return __builtin_ctzll(__high) + _Nd_ull;
}
}
template<typename _Tp>
constexpr int
__countr_one(_Tp __x) noexcept
{
return std::__countr_zero((_Tp)~__x);
}
template<typename _Tp>
constexpr int
__popcount(_Tp __x) noexcept
{
using __gnu_cxx::__int_traits;
constexpr auto _Nd = __int_traits<_Tp>::__digits;
constexpr auto _Nd_ull = __int_traits<unsigned long long>::__digits;
constexpr auto _Nd_ul = __int_traits<unsigned long>::__digits;
constexpr auto _Nd_u = __int_traits<unsigned>::__digits;
if _GLIBCXX17_CONSTEXPR (_Nd <= _Nd_u)
return __builtin_popcount(__x);
else if _GLIBCXX17_CONSTEXPR (_Nd <= _Nd_ul)
return __builtin_popcountl(__x);
else if _GLIBCXX17_CONSTEXPR (_Nd <= _Nd_ull)
return __builtin_popcountll(__x);
else // (_Nd > _Nd_ull)
{
static_assert(_Nd <= (2 * _Nd_ull),
"Maximum supported integer size is 128-bit");
constexpr auto __max_ull = __int_traits<unsigned long long>::__max;
unsigned long long __low = __x & __max_ull;
unsigned long long __high = __x >> _Nd_ull;
return __builtin_popcountll(__low) + __builtin_popcountll(__high);
}
}
template<typename _Tp>
constexpr bool
__has_single_bit(_Tp __x) noexcept
{ return std::__popcount(__x) == 1; }
template<typename _Tp>
constexpr _Tp
__bit_ceil(_Tp __x) noexcept
{
using __gnu_cxx::__int_traits;
constexpr auto _Nd = __int_traits<_Tp>::__digits;
if (__x == 0 || __x == 1)
return 1;
auto __shift_exponent = _Nd - std::__countl_zero((_Tp)(__x - 1u));
// If the shift exponent equals _Nd then the correct result is not
// representable as a value of _Tp, and so the result is undefined.
// Want that undefined behaviour to be detected in constant expressions,
// by UBSan, and by debug assertions.
#ifdef _GLIBCXX_HAVE_BUILTIN_IS_CONSTANT_EVALUATED
if (!__builtin_is_constant_evaluated())
{
__glibcxx_assert( __shift_exponent != __int_traits<_Tp>::__digits );
}
#endif
using __promoted_type = decltype(__x << 1);
if _GLIBCXX17_CONSTEXPR (!is_same<__promoted_type, _Tp>::value)
{
// If __x undergoes integral promotion then shifting by _Nd is
// not undefined. In order to make the shift undefined, so that
// it is diagnosed in constant expressions and by UBsan, we also
// need to "promote" the shift exponent to be too large for the
// promoted type.
const int __extra_exp = sizeof(__promoted_type) / sizeof(_Tp) / 2;
__shift_exponent |= (__shift_exponent & _Nd) << __extra_exp;
}
return (_Tp)1u << __shift_exponent;
}
template<typename _Tp>
constexpr _Tp
__bit_floor(_Tp __x) noexcept
{
constexpr auto _Nd = __gnu_cxx::__int_traits<_Tp>::__digits;
if (__x == 0)
return 0;
return (_Tp)1u << (_Nd - std::__countl_zero((_Tp)(__x >> 1)));
}
template<typename _Tp>
constexpr _Tp
__bit_width(_Tp __x) noexcept
{
constexpr auto _Nd = __gnu_cxx::__int_traits<_Tp>::__digits;
return _Nd - std::__countl_zero(__x);
}
/// @endcond
#if __cplusplus > 201703L
#define __cpp_lib_bitops 201907L
/// @cond undoc
template<typename _Tp, typename _Up = _Tp>
using _If_is_unsigned_integer
= enable_if_t<__is_unsigned_integer<_Tp>::value, _Up>;
/// @endcond
// [bit.rot], rotating
/// Rotate `x` to the left by `s` bits.
template<typename _Tp>
[[nodiscard]] constexpr _If_is_unsigned_integer<_Tp>
rotl(_Tp __x, int __s) noexcept
{ return std::__rotl(__x, __s); }
/// Rotate `x` to the right by `s` bits.
template<typename _Tp>
[[nodiscard]] constexpr _If_is_unsigned_integer<_Tp>
rotr(_Tp __x, int __s) noexcept
{ return std::__rotr(__x, __s); }
// [bit.count], counting
/// The number of contiguous zero bits, starting from the highest bit.
template<typename _Tp>
constexpr _If_is_unsigned_integer<_Tp, int>
countl_zero(_Tp __x) noexcept
{ return std::__countl_zero(__x); }
/// The number of contiguous one bits, starting from the highest bit.
template<typename _Tp>
constexpr _If_is_unsigned_integer<_Tp, int>
countl_one(_Tp __x) noexcept
{ return std::__countl_one(__x); }
/// The number of contiguous zero bits, starting from the lowest bit.
template<typename _Tp>
constexpr _If_is_unsigned_integer<_Tp, int>
countr_zero(_Tp __x) noexcept
{ return std::__countr_zero(__x); }
/// The number of contiguous one bits, starting from the lowest bit.
template<typename _Tp>
constexpr _If_is_unsigned_integer<_Tp, int>
countr_one(_Tp __x) noexcept
{ return std::__countr_one(__x); }
/// The number of bits set in `x`.
template<typename _Tp>
constexpr _If_is_unsigned_integer<_Tp, int>
popcount(_Tp __x) noexcept
{ return std::__popcount(__x); }
// [bit.pow.two], integral powers of 2
#define __cpp_lib_int_pow2 202002L
/// True if `x` is a power of two, false otherwise.
template<typename _Tp>
constexpr _If_is_unsigned_integer<_Tp, bool>
has_single_bit(_Tp __x) noexcept
{ return std::__has_single_bit(__x); }
/// The smallest power-of-two not less than `x`.
template<typename _Tp>
constexpr _If_is_unsigned_integer<_Tp>
bit_ceil(_Tp __x) noexcept
{ return std::__bit_ceil(__x); }
/// The largest power-of-two not greater than `x`.
template<typename _Tp>
constexpr _If_is_unsigned_integer<_Tp>
bit_floor(_Tp __x) noexcept
{ return std::__bit_floor(__x); }
/// The smallest integer greater than the base-2 logarithm of `x`.
template<typename _Tp>
constexpr _If_is_unsigned_integer<_Tp>
bit_width(_Tp __x) noexcept
{ return std::__bit_width(__x); }
#define __cpp_lib_endian 201907L
/// Byte order
enum class endian
{
little = __ORDER_LITTLE_ENDIAN__,
big = __ORDER_BIG_ENDIAN__,
native = __BYTE_ORDER__
};
#endif // C++2a
/// @}
_GLIBCXX_END_NAMESPACE_VERSION
} // namespace std
#endif // C++14
#endif // _GLIBCXX_BIT
```
### Versions
latest main branch
cc @malfet @seemethere
| true
|
2,889,044,080
|
Typo Errors fixed in multiple files
|
ENUMERA8OR
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"release notes: mobile",
"module: dynamo",
"release notes: distributed (checkpoint)",
"module: compiled autograd",
"oncall: distributed checkpointing"
] | 6
|
CONTRIBUTOR
|
# Fix typo errors across PyTorch codebase
This PR fixes various spelling errors throughout the PyTorch codebase to improve documentation quality and code readability.
## Changes Made
### Documentation Fixes
- Changed "seperate" to "separate" in multiple files:
- `setup.py`: Build system documentation
- `torch/_library/triton.py`: AOT compilation comments
- `torch/csrc/dynamo/compiled_autograd.h`: Node compilation documentation
- `torch/export/_unlift.py`: Pass population comments
- `torch/export/exported_program.py`: Decomposition table notes
### Code Comments and Error Messages
- Changed "occured" to "occurred" in:
- `test/mobile/test_lite_script_module.py`: Exception handling comments
- `torch/export/_draft_export.py`: Error message text
- `aten/src/ATen/native/cuda/linalg/BatchLinearAlgebra.cpp`: MAGMA bug comment
- `torch/csrc/utils/python_numbers.h`: Overflow handling comment
- `torch/csrc/jit/OVERVIEW.md`: Graph compilation documentation
- `torch/_dynamo/symbolic_convert.py`: Error explanation
### API Documentation
- Changed "fullfill" to "fulfill" in `torch/distributed/checkpoint/state_dict_loader.py`
- Changed "accross" to "across" in:
- `torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp`
- `torch/distributed/distributed_c10d.py`
## Motivation
These changes improve code readability and maintain consistent spelling throughout the codebase. No functional changes were made; this is purely a documentation and comment improvement PR.
## Test Plan
No testing required as these changes only affect comments and documentation.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan
| true
|
2,889,042,542
|
[fx] Move Node._prepend/Node._remove_from_list to C++
|
jansel
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148292
* #148288
* __->__ #148261
* #148260
* #148243
Microbenchmarking `fx.symbolic_trace(lambda x: functools.reduce(operator.add, [x, *range(100000)]))`, before:
```
24303536 function calls (23503339 primitive calls) in 10.726 seconds
```
after:
```
20003454 function calls (19203257 primitive calls) in 8.936 seconds
```
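For reference, numbers of this form can be reproduced with `cProfile` (a minimal sketch; it is an assumption that this mirrors the exact measurement setup, and call counts vary by build):
```python
# Minimal sketch of the microbenchmark above; exact call counts vary by build.
import cProfile
import functools
import operator

import torch.fx as fx

cProfile.run(
    "fx.symbolic_trace(lambda x: functools.reduce(operator.add, [x, *range(100000)]))",
    sort="cumulative",
)
```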
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,889,042,516
|
[fx] Move Node._update_args_kwargs to C++
|
jansel
|
closed
|
[
"Merged",
"Reverted",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148292
* #148288
* #148261
* __->__ #148260
* #148243
Microbenchmarking `fx.symbolic_trace(lambda x: functools.reduce(operator.add, [x, *range(100000)]))`, before:
```
25203549 function calls (24403352 primitive calls) in 12.090 seconds
```
after:
```
24303536 function calls (23503339 primitive calls) in 10.726 seconds
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,889,020,952
|
Raise a warning when `torch.nn.utils.clip_grad_norm_` receives an exhausted generator
|
Orimalca
|
open
|
[
"module: nn",
"module: error checking",
"triaged",
"actionable"
] | 2
|
NONE
|
The current clip_grad_norm_ and clip_grad_value_ functions accept an Iterable[Tensor] as input. However, if that iterable is a generator that has already been consumed, then it’s effectively empty when passed in, and no gradient clipping occurs—silently. This can cause subtle bugs when the user thinks they are clipping gradients but actually aren’t.
Consider the following scenario:
```python
model = ... # some `torch.nn.Module` instance
y = model(x) # forward pass
loss = ... # calc the loss
loss.backward() # calculating gradients
params_to_clip = model.parameters() # a generator object
# e.g., iterate over the model's parameters (for some reason), unknowingly exhausting the generator
for p in params_to_clip:
    ... # do something with p
# now, the user wants to clip the gradients before doing an update step
torch.nn.utils.clip_grad_norm_(params_to_clip, ...)
# NOTE: the user unknowingly passes an empty/exhausted generator to `clip_grad_norm_`, so no clipping is performed, but no warning or error is raised, leaving the user unaware.
# The user is unknowingly updating the model with unclipped gradients (since the generator was exhausted)
optimizer.step()
optimizer.zero_grad()
```
In practice, it is rare for a model to truly have zero parameters, so an empty generator is almost always a bug—often the user simply isn’t aware that model.parameters() is a generator. This can lead to unstable training if gradients aren’t actually clipped as intended.
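A minimal sketch of the defensive pattern that avoids the pitfall today, independent of any change to PyTorch:
```python
import torch

model = torch.nn.Linear(4, 2)
model(torch.randn(1, 4)).sum().backward()

# Materialize the generator once; a list can be iterated any number of times.
params_to_clip = list(model.parameters())
for p in params_to_clip:
    pass  # inspect p, log norms, etc.

torch.nn.utils.clip_grad_norm_(params_to_clip, max_norm=1.0)  # still clips
```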
### Alternatives
Raise a warning if clip_grad_norm_ (or clip_grad_value_) detects an exhausted generator. For instance:
```python
import types
import warnings
def clip_grad_norm_(...) -> torch.Tensor:
    if isinstance(parameters, torch.Tensor):
        parameters = [parameters]
    else:
        params_is_gen = isinstance(parameters, types.GeneratorType)
        # prevent generators from being exhausted
        parameters = list(parameters)
        if params_is_gen and len(parameters) == 0:
            warnings.warn(
                "`parameters` is an empty generator. This might be an unintended "
                "result of previous operations, meaning no gradient clipping "
                "will occur. If this is intentional, you can ignore this message."
            )
    # rest of the code remains unchanged
    ...
```
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet
| true
|
2,888,998,456
|
[BE]: No include left behind - recursive glob setuptools support
|
Skylion007
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: build",
"topic: improvements",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Fixes #148256
Test plan: check the printout from the setup.py build and verify the files are still included.
| true
|
2,888,991,921
|
[FSDP2] HSDP with globally sharded fp32 weights and optimizer states
|
ChrisLiu6
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 5
|
NONE
|
### 🚀 The feature, motivation and pitch
First, I hope to show respect to the FSDP/FSDP2 team. I have been using FSDP for a long time and am recently working on migrating to FSDP2. I feel the new APIs are now more user-friendly and logically sound. Thanks so much for your efforts!
I am writing to inquire whether the following feature can/will be supported by FSDP2:
Let's consider HSDP as the starting point. When working with HSDP + gradient accumulation, I want:
1. (not supported) the FP32 model parameters and optimizer states to be sharded globally across all DP_replicate * DP_shard ranks.
2. (supported) After forward, the parameters are resharded onto the 8 intra-node GPUs (which can be achieved with `reshard_after_forward=True`).
3. (supported) After backward, the parameters and gradients are resharded onto the 8 intra-node GPUs **if not the last gradient accumulation step**.
By the way, I currently implement this through the following; is that correct?
```
y = model(x)
model.set_requires_gradient_sync(True, recurse=True)
model.set_requires_all_reduce(last_accumulate_step, recurse=True)
model.set_reshard_after_backward(True, recurse=True)
model.set_reshard_after_backward(last_accumulate_step, recurse=False)
y.backward()
```
4. (unsupported) At the last accumulation step, I hope the intra-node-sharded gradients are reduce-scattered, instead of all-reduced, among all dp_replicate\*dp_shard ranks. Parameters are optimized on each rank, and afterwards, in the next first forward, the updated parameters are all-gathered from the dp_replicate\*dp_shard ranks.
I also think the aforementioned is equivalent to the following, taking a 1d mesh (namely not HSDP) as the starting point (suppose we have 4 nodes * 8 GPUs/node = 32 GPUs):
1. (supported) After forward, shard the parameters within each node, which is supported by `reshard_after_forward=8`.
2. (not supported) After backward, if not the last accumulation step, parameters are resharded **within each node** instead of globally (semantically similar to `reshard_after_backward=8`, which is unsupported as only bool values are accepted).
3. (not supported) Similar to parameters, the gradients should also be sharded within each node if not the last accumulation step.
4. After the last accumulation step, reduce-scatter the accumulated and intra-node-sharded gradients among the total 32 GPUs to update.
I think this functionality can help FSDP2 better handle situations where large gradient accumulation is needed. Don't hesitate to correct me if I got anything wrong. Looking forward to your reply!
Thanks!
@awgu
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,888,987,598
|
Simplify package_data handling in setup.py
|
Skylion007
|
closed
|
[
"module: build",
"triaged",
"enhancement"
] | 1
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
As of setuptools v62.3.0, the package_data field in setup.py finally supports recursive globs: https://github.com/pypa/setuptools/blob/v62.3.0/CHANGES.rst. This means we can include all the header files in our setup.py and won't run into issues when forgetting to include them or managing a list of headers we need to add. All we have to do is put proper version bounds in our build requirements so that we always build with at least 62.3.0. This setuptools release is from 2022 and should support all the way down to our minimum supported Python version.
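A minimal sketch of what this enables, with an illustrative package name and paths rather than PyTorch's actual layout:
```python
from setuptools import setup

# Illustrative layout only; with setuptools >= 62.3.0 the "**" pattern recurses
# into subdirectories, so nested headers need not be enumerated by hand.
setup(
    name="example",
    packages=["example"],
    package_data={"example": ["**/*.h", "**/*.hpp"]},
)
```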
### Alternatives
_No response_
### Additional context
_No response_
cc @malfet @seemethere
| true
|
2,888,921,729
|
Set requires grad in TensorMaker::make_tensor()
|
irshadcc
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp"
] | 33
|
CONTRIBUTOR
|
Fixes #146419
| true
|
2,888,877,317
|
[Inductor][CPP] Fix the vec codegen for tanh
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148254
**Summary**
Fix https://github.com/pytorch/pytorch/issues/148241, The previous vectorized code generation for `tanh` used a decomposed implementation, leading to numerical differences that were further amplified by `atan2`. For example, in the given test case after `tanh`, the eager output at `[0,0,11,47]` was `-5.820766091346741e-10`, while the compiled output was `1.4319084584712982e-08`, resulting in different `atan2` outputs of `-2.3561` and `0.7853`. This issue is fixed by switching to the Sleef implementation.
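To see how a ~1e-8 mismatch after `tanh` becomes an O(1) mismatch after `atan2`, here is a standalone illustration using the values quoted above:
```python
import torch

# The two values quoted above for index [0, 0, 11, 47] after tanh.
eager = torch.tensor(-5.820766091346741e-10)
compiled = torch.tensor(1.4319084584712982e-08)

# atan2(x, x) depends only on the sign of x, so a tiny sign flip moves the
# result from -3*pi/4 to pi/4.
print(torch.atan2(eager, eager))        # tensor(-2.3562)
print(torch.atan2(compiled, compiled))  # tensor(0.7854)
```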
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_repro.py -k test_tanh_atan2
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,830,734
|
Improve Notation for Score Function in Documentation
|
songyuc
|
open
|
[
"module: distributions",
"module: docs",
"triaged",
"actionable"
] | 1
|
NONE
|
### 📚 The doc issue
**Description:**
I noticed that the current PyTorch documentation on the distributions page (https://pytorch.org/docs/stable/distributions.html) presents the score function using the following formula:
$$
\Delta \theta = \alpha r\frac{\partial \log p\left(a|\pi^\theta(s)\right)}{\partial \theta}
$$
I believe this notation could be improved for greater clarity and rigor:
1. **Gradient Notation:**
The symbol $\partial$ is typically reserved for partial derivatives with respect to a single variable. However, since $\theta$ is a parameter vector, it is more conventional to use the gradient operator $\nabla_\theta$ to indicate differentiation with respect to all components of $\theta$. For example, the update should be written as:
$$\Delta \theta = \alpha \, r \, \nabla_\theta \log \pi_\theta(a \mid s)$$
2. **Policy Notation:**
The notation $\pi^\theta(s)$ is less common. It is more standard to denote the policy as $\pi_\theta(a \mid s)$, emphasizing that the policy outputs the probability of taking action $a$ in state $s$ given the parameters $\theta$.
3. **Sign Convention:**
Depending on whether one is performing gradient ascent or descent, a negative sign may be necessary. In practice, the loss is often defined as:
$$L(\theta) = -r \log \pi_\theta(a \mid s)$$
so that when using gradient descent, the update rule remains consistent as:
$$\Delta \theta = \alpha \, r \, \nabla_\theta \log \pi_\theta(a \mid s)$$
A brief note on this might help clarify potential confusion.
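For concreteness, the convention in point 3 corresponds to the usual pattern with `torch.distributions` (a minimal sketch; the logits and reward are placeholders, not part of the documentation):
```python
import torch
from torch.distributions import Categorical

logits = torch.randn(4, requires_grad=True)  # hypothetical policy output for one state
dist = Categorical(logits=logits)

action = dist.sample()
reward = 1.0  # placeholder return r

# L(theta) = -r * log pi_theta(a | s); gradient descent on this loss realizes
# delta theta = alpha * r * grad_theta log pi_theta(a | s).
loss = -reward * dist.log_prob(action)
loss.backward()
print(logits.grad)
```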
### Suggest a potential alternative/fix
**Suggestion:**
I propose updating the documentation to use the more conventional notation:
$$
\Delta \theta = \alpha \, r \, \nabla_\theta \log \pi_\theta(a \mid s)
$$
Additionally, including a note on incorporating a baseline (e.g., using $r - b$ instead of just $r$) to reduce variance in the gradient estimation could further improve clarity in the context of reinforcement learning.
Thank you for considering this suggestion. I look forward to any discussion on improving the clarity of the documentation!
cc @fritzo @neerajprad @alicanb @nikitaved @svekars @sekyondaMeta @AlannaBurke
| true
|
2,888,824,937
|
Can't pass `strict=False` when loading a distributed checkpoint. Succeeds without warnings for "unexpected" keys, fails for "missing" keys.
|
baldassarreFe
|
closed
|
[
"triaged",
"oncall: distributed checkpointing"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Goal: save a model with distributed checkpointer, then load it into a smaller model (drop the extra parameters) or into a bigger model (don't change the missing parameters).
With the default pytorch functions `state_dict()` and `load_state_dict()`, it's easy to pass `strict=False` and get a summary of "unexpected" keys and "missing" keys.
With the distributed checkpointer, there is no way of controlling this behavior, i.e. no `strict` parameter to `dcp.load()`. The current behavior (undocumented?) is:
- If the checkpoint on disk has "unexpected" keys, they are silently ignored and the call succeeds
- If the checkpoint on disk has "missing" keys, raise an error
From the [documentation](https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.StateDictOptions), I can see that a class `StateDictOptions` exists and that it contains a `strict` parameter that seems to match my requirements. The documentation says that `StateDictOptions` can be passed as a parameter to `set_state_dict()`. However, an error is raised inside `dcp.load()` before the call to `set_state_dict()` in my code.
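For reference, the documented non-strict knob applies at the set step rather than at `dcp.load()` (a sketch; it does not help here because, as shown below, the error is raised before this point is reached):
```python
from torch.distributed.checkpoint.state_dict import StateDictOptions, set_model_state_dict

# Hypothetical usage; `model` and `state_dict` stand for the objects in the script below.
set_model_state_dict(
    model,
    state_dict,
    options=StateDictOptions(strict=False),
)
```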
Simple test script:
```python
import tempfile
import torch
import torch.distributed.checkpoint as dcp
import torch.distributed.checkpoint.state_dict as dcpsd
def reload_same_model():
    """Save and reload a model and its optimizer."""
    model = Model()
    optimizer = torch.optim.AdamW(model.parameters())
    for n, p in model.named_parameters():
        print(f"{n:<15}: {p.norm(p=1).item()}")
    to_save = {
        "iteration": 10,
        "model": dcpsd.get_model_state_dict(model),
        "optimizer": dcpsd.get_optimizer_state_dict(model, optimizer),
    }
    tmp_dir = tempfile.mkdtemp()
    dcp.save(to_save, storage_writer=dcp.filesystem.FileSystemWriter(tmp_dir))
    model = Model()
    optimizer = torch.optim.AdamW(model.parameters())
    to_load = {
        "iteration": -1,
        "model": dcpsd.get_model_state_dict(model),
        "optimizer": dcpsd.get_optimizer_state_dict(model, optimizer),
    }
    dcp.load(to_load, storage_reader=dcp.filesystem.FileSystemReader(tmp_dir))
    dcpsd.set_model_state_dict(model, to_load["model"])
    dcpsd.set_optimizer_state_dict(model, optimizer, to_load["optimizer"])
    for n, p in model.named_parameters():
        print(f"{n:<15}: {p.norm(p=1).item()}")
    print(to_load["iteration"])

def load_into_smaller_model():
    """Save a model and load it into a smaller model."""
    model = Model()
    for n, p in model.named_parameters():
        print(f"{n:<15}: {p.norm(p=1).item()}")
    to_save = {
        "iteration": 10,
        "model": dcpsd.get_model_state_dict(model),
    }
    tmp_dir = tempfile.mkdtemp()
    dcp.save(to_save, storage_writer=dcp.filesystem.FileSystemWriter(tmp_dir))
    model = SmallerModel()
    to_load = {
        "iteration": -1,
        "model": dcpsd.get_model_state_dict(model),
    }
    dcp.load(to_load, storage_reader=dcp.filesystem.FileSystemReader(tmp_dir))
    dcpsd.set_model_state_dict(model, to_load["model"])  # No need for StateDictOptions(strict=False), why?
    for n, p in model.named_parameters():
        print(f"{n:<15}: {p.norm(p=1).item()}")
    print(to_load["iteration"])

def load_into_bigger_model():
    """Save a model and load it into a bigger model."""
    model = Model()
    for n, p in model.named_parameters():
        print(f"{n:<15}: {p.norm(p=1).item()}")
    to_save = {
        "iteration": 10,
        "model": dcpsd.get_model_state_dict(model),
    }
    tmp_dir = tempfile.mkdtemp()
    dcp.save(to_save, storage_writer=dcp.filesystem.FileSystemWriter(tmp_dir))
    model = BiggerModel()
    to_load = {
        "iteration": -1,
        "model": dcpsd.get_model_state_dict(model),
    }
    dcp.load(to_load, storage_reader=dcp.filesystem.FileSystemReader(tmp_dir))
    dcpsd.set_model_state_dict(model, to_load["model"])
    for n, p in model.named_parameters():
        print(f"{n:<15}: {p.norm(p=1).item()}")
    print(to_load["iteration"])

class Model(torch.nn.Sequential):
    def __init__(self):
        super().__init__(torch.nn.Linear(2, 4), torch.nn.Linear(4, 8))

class SmallerModel(torch.nn.Sequential):
    def __init__(self):
        super().__init__(torch.nn.Linear(2, 4))

class BiggerModel(torch.nn.Sequential):
    def __init__(self):
        super().__init__(torch.nn.Linear(2, 4), torch.nn.Linear(4, 8), torch.nn.Linear(8, 16))

if __name__ == "__main__":
    reload_same_model()  # OK
    load_into_smaller_model()  # OK, doesn't warn about "unexpected" keys
    load_into_bigger_model()  # RuntimeError: Missing key in checkpoint state_dict: model.2.weight
```
Error:
```
Traceback (most recent call last):
File "scratch.py", line 103, in <module>
load_into_bigger_model() # RuntimeError: Missing key in checkpoint state_dict: model.2.weight
^^^^^^^^^^^^^^^^^^^^^^^^
File "scratch.py", line 78, in load_into_bigger_model
dcp.load(to_load, storage_reader=dcp.filesystem.FileSystemReader(tmp_dir))
File "torch/distributed/checkpoint/logger.py", line 83, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "torch/distributed/checkpoint/utils.py", line 438, in inner_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "torch/distributed/checkpoint/state_dict_loader.py", line 172, in load
_load_state_dict(
File "torch/distributed/checkpoint/state_dict_loader.py", line 229, in _load_state_dict
central_plan: LoadPlan = distW.reduce_scatter("plan", local_step, global_step)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/distributed/checkpoint/utils.py", line 192, in reduce_scatter
raise result
torch.distributed.checkpoint.api.CheckpointException: CheckpointException ranks:dict_keys([0])
Traceback (most recent call last): (RANK 0)
File "torch/distributed/checkpoint/utils.py", line 165, in reduce_scatter
local_data = map_fun()
^^^^^^^^^
File "torch/distributed/checkpoint/logger.py", line 83, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "torch/distributed/checkpoint/state_dict_loader.py", line 218, in local_step
local_plan = planner.create_local_plan()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/distributed/checkpoint/default_planner.py", line 233, in create_local_plan
return create_default_local_load_plan(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/distributed/checkpoint/default_planner.py", line 354, in create_default_local_load_plan
raise RuntimeError(f"Missing key in checkpoint state_dict: {fqn}.")
RuntimeError: Missing key in checkpoint state_dict: model.2.weight.
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250129+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] lovely-numpy==0.2.13
[pip3] mypy==1.14.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] open_clip_torch==2.30.0
[pip3] optree==0.14.0
[pip3] pynvjitlink-cu12==0.5.0
[pip3] pytorch-triton==3.2.0+gitb2684bf3
[pip3] torch==2.7.0.dev20250129+cu124
[pip3] torchaudio==2.6.0.dev20250129+cu124
[pip3] torchmetrics==1.6.1
[pip3] torchrl==0.7.1
[pip3] torchvision==0.22.0.dev20250129+cu124
[conda] cuda-cudart 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-cudart-dev 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-cudart-dev_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart-static 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-cudart-static_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cupti 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-cupti-static 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-libraries 12.4.1 0 nvidia/label/cuda-12.4.1
[conda] cuda-libraries-dev 12.4.1 0 nvidia/label/cuda-12.4.1
[conda] cuda-libraries-static 12.4.1 0 nvidia/label/cuda-12.4.1
[conda] cuda-nvrtc 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-nvrtc-dev 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-nvrtc-static 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-nvtx 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-opencl 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] cuda-opencl-dev 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] libcublas 12.4.5.8 0 nvidia/label/cuda-12.4.1
[conda] libcublas-dev 12.4.5.8 0 nvidia/label/cuda-12.4.1
[conda] libcublas-static 12.4.5.8 0 nvidia/label/cuda-12.4.1
[conda] libcufft 11.2.1.3 0 nvidia/label/cuda-12.4.1
[conda] libcufft-dev 11.2.1.3 0 nvidia/label/cuda-12.4.1
[conda] libcufft-static 11.2.1.3 0 nvidia/label/cuda-12.4.1
[conda] libcurand 10.3.5.147 0 nvidia/label/cuda-12.4.1
[conda] libcurand-dev 10.3.5.147 0 nvidia/label/cuda-12.4.1
[conda] libcurand-static 10.3.5.147 0 nvidia/label/cuda-12.4.1
[conda] libcusolver 11.6.1.9 0 nvidia/label/cuda-12.4.1
[conda] libcusolver-dev 11.6.1.9 0 nvidia/label/cuda-12.4.1
[conda] libcusolver-static 11.6.1.9 0 nvidia/label/cuda-12.4.1
[conda] libcusparse 12.3.1.170 0 nvidia/label/cuda-12.4.1
[conda] libcusparse-dev 12.3.1.170 0 nvidia/label/cuda-12.4.1
[conda] libcusparse-static 12.3.1.170 0 nvidia/label/cuda-12.4.1
[conda] libnvjitlink 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] libnvjitlink-dev 12.4.127 0 nvidia/label/cuda-12.4.1
[conda] libopenvino-pytorch-frontend 2024.6.0 h5888daf_3 conda-forge
[conda] lovely-numpy 0.2.13 pypi_0 pypi
[conda] nccl 2.24.3.1 hb92ee24_0 conda-forge
[conda] numpy 1.26.4 py311h64a7726_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] nvtx 0.2.10 py311h9ecbd09_2 conda-forge
[conda] open-clip-torch 2.30.0 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] pynvjitlink 0.5.0 py311hf640dd1_0 rapidsai
[conda] pytorch-triton 3.2.0+gitb2684bf3 pypi_0 pypi
[conda] torch 2.7.0.dev20250129+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250129+cu124 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchrl 0.7.1 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250129+cu124 pypi_0 pypi
```
cc @LucasLLC @pradeepfn
| true
|
2,888,778,960
|
Errors: train a model of sparsity with tensorrt-model-optimization and FSDP.
|
Vieeo
|
closed
|
[
"oncall: distributed",
"module: fsdp"
] | 3
|
NONE
|
### 🐛 Describe the bug
I'm training a sparsified flux-dev model with accelerate and FSDP.
This is the FSDP config used with accelerate:
```
distributed_type: FSDP
fsdp_config:
  fsdp_auto_wrap_policy: SIZE_BASED_WRAP
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_forward_prefetch: true
  fsdp_min_num_params: 1000000
  fsdp_offload_params: true
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: true
```
The error occurs when I do:
```python
flux = mto.restore(flux, sparse_ckpt)
flux = accelerator.prepare_model(flux)
print(flux)
```
### Errors as follows:
[rank1]: Traceback (most recent call last):
[rank1]: File “/data/train_flux.py”, line 447, in
[rank1]: main()
[rank1]: File “/data/train_flux.py”, line 165, in main
[rank1]: print(dit)
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 2943, in repr
[rank1]: mod_str = repr(module)
[rank1]: ^^^^^^^^^^^^
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 2943, in repr
[rank1]: mod_str = repr(module)
[rank1]: ^^^^^^^^^^^^
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/nn/modules/module.py”, line 2937, in repr
[rank1]: extra_repr = self.extra_repr()
[rank1]: ^^^^^^^^^^^^^^^^^
[rank1]: File “/data/modelopt/torch/opt/dynamic.py”, line 861, in extra_repr
[rank1]: val = getattr(self, name)
[rank1]: ^^^^^^^^^^^^^^^^^^^
[rank1]: File “/data/modelopt/torch/opt/dynamic.py”, line 806, in getattr
[rank1]: return manager.get_da_cb(name)(self, value)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File “/data/modelopt/torch/opt/dynamic.py”, line 83, in call
[rank1]: val = cb(self_module, val)
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank1]: File “/data/modelopt/torch/sparsity/module.py”, line 35, in _get_weight
[rank1]: masked_weight = weight * mod._weight_mask
[rank1]: ~^~~~~~~~~~~~
[rank1]: RuntimeError: The size of tensor a (0) must match the size of tensor b (64) at non-singleton dimension 1
### If I skip “print(flux)”, error as follows:
[rank1]: Traceback (most recent call last):
[rank1]: File “/data/train_flux.py”, line 438, in
[rank1]: main()
[rank1]: File “/data/train_flux.py”, line 374, in main
[rank1]: accelerator.backward(loss)
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/accelerate/accelerator.py”, line 2196, in backward
[rank1]: loss.backward(**kwargs)
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/_tensor.py”, line 581, in backward
[rank1]: torch.autograd.backward(
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/autograd/init.py”, line 347, in backward
[rank1]: _engine_run_backward(
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/autograd/graph.py”, line 825, in _engine_run_backward
[rank1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/utils/_contextlib.py”, line 116, in decorate_context
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/distributed/fsdp/_runtime_utils.py”, line 734, in _post_backward_hook
[rank1]: handle._use_unsharded_grad_views()
[rank1]: File “/root/miniforge3/envs/py312torch250/lib/python3.12/site-packages/torch/distributed/fsdp/_flat_param.py”, line 1982, in _use_unsharded_grad_views
[rank1]: hasattr(module, param_name),
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File “/data/modelopt/torch/opt/dynamic.py”, line 806, in getattr
[rank1]: return manager.get_da_cb(name)(self, value)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File “/data/modelopt/torch/opt/dynamic.py”, line 83, in call
[rank1]: val = cb(self_module, val)
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank1]: File “/data/modelopt/torch/sparsity/module.py”, line 34, in _get_weight
[rank1]: masked_weight = weight * mod._weight_mask
[rank1]: ~^~~~~~~~~~~~
[rank1]: RuntimeError: The size of tensor a (2360064) must match the size of tensor b (3072) at non-singleton dimension 1
### Versions
Basic version info:
python 3.12.0
pytorch 2.5.0
nvidia-modelopt 0.21.0
cuda: 12.6
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,888,757,244
|
BrokenPipeError: [Errno 32] Broken pipe when lacking Numpy package
|
Cookiee235
|
open
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When the `numpy` package is not installed, torch emits a warning message, but the compiled model crashes at the inference stage.
I did some experiments to isolate this bug:
* install numpy --> runs well
* remove the `torch.compile` statement --> runs well
```python
import torch
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        x = torch.add(x, x)
        return x
model = Model().eval().cuda()
x = torch.randn(1, 1).cuda()
inputs = [x]
# model = torch.compile(model) # only compiled model can trigger this bug
output = model(*inputs) # broken pipe
```
```
cpu = _conversion_method_template(device=torch.device("cpu"))
/data/af/miniconda3/envs/torch/lib/python3.13/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
(torch) [af@sccpu6 test_torch]$ Exception ignored in: <_io.BufferedWriter name=63>
BrokenPipeError: [Errno 32] Broken pipe
```
### Versions
```
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscplm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,888,732,369
|
Typo errors fixed in various files
|
ENUMERA8OR
|
closed
|
[
"oncall: distributed",
"open source",
"release notes: mobile",
"module: dynamo",
"module: compiled autograd"
] | 2
|
CONTRIBUTOR
|
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan
| true
|
2,888,727,774
|
Typo Errors fixed
|
ENUMERA8OR
|
closed
|
[] | 2
|
CONTRIBUTOR
| null | true
|
2,888,666,631
|
`torch.multinomial` outputs inconsistency on ARM and x86
|
Leo-Imperial
|
open
|
[
"module: distributions",
"triaged",
"module: random"
] | 7
|
NONE
|
### 🐛 Describe the bug
**Description:**
All tensors and generators are set up on the CPU, independent of specific devices. When experiments were conducted, running the code on NPU (ARM) and GPU (X86) produced differing results, and even between NPU (ARM) and NPU (X86) the outputs varied. However, running the same code on NPU (X86) and GPU (X86) produced consistent outputs.
**Minimal example**:
```python
import torch
dataset_len = 142816
generator = torch.Generator()
generator = generator.manual_seed(42 + 1)
weights = [1. for _ in range(dataset_len)]
shuf = torch.multinomial(
torch.tensor(weights),
num_samples=dataset_len,
replacement=False,
generator=generator,
)
print(shuf)
```
**Outputs:**
**ARM:**
`tensor([129540, 11595, 108641, ..., 32274, 52564, 82136])`
**X86:**
`tensor([ 343, 139839, 105443, ..., 15203, 72413, 91169])`
### Versions
X86:
```
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166,168,170,172,174,176,178,180,182,184,186,188,190
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127,129,131,133,135,137,139,141,143,145,147,149,151,153,155,157,159,161,163,165,167,169,171,173,175,177,179,181,183,185,187,189,191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.18.1
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==2.1.0
[pip3] torch==2.1.0
[pip3] torchmetrics==1.6.1
[pip3] triton==2.1.0
[conda] mkl 2024.0.0 pypi_0 pypi
[conda] numpy 1.22.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.18.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-lightning 2.1.0 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
```
ARM:
```
PyTorch version: 2.1.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: openEuler 22.03 LTS (aarch64)
GCC version: (GCC) 10.3.1
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.34
Python version: 3.9.18 | packaged by conda-forge | (main, Dec 23 2023, 17:20:25) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.10.0-60.18.0.50.oe2203.aarch64-aarch64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: HiSilicon
BIOS Vendor ID: HiSilicon
Model name: Kunpeng-920
BIOS Model name: HUAWEI Kunpeng 920 5250
Model: 0
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 4
Stepping: 0x1
Frequency boost: disabled
CPU max MHz: 2600.0000
CPU min MHz: 200.0000
BogoMIPS: 200.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs
L1d cache: 12 MiB (192 instances)
L1i cache: 12 MiB (192 instances)
L2 cache: 96 MiB (192 instances)
L3 cache: 192 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
NUMA node4 CPU(s): 96-119
NUMA node5 CPU(s): 120-143
NUMA node6 CPU(s): 144-167
NUMA node7 CPU(s): 168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.1.0
[pip3] torch==2.1.0
[pip3] torch-npu==2.1.0.post11.dev20250116
[pip3] torchmetrics==1.6.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 2.1.0 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
[conda] torch-npu 2.1.0.post11.dev20250116 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
```
cc @fritzo @neerajprad @alicanb @nikitaved @pbelevich
| true
|
2,888,642,417
|
Remove unneeded Clang-tidy suppression
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,888,598,000
|
[PT2E x86 & Intel GPU] Collapse dim in qlinear_pointwise_binary fusion
|
ZhiweiYan-96
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 1
|
COLLABORATOR
|
# Motivation
Currently, most `qlinear+add` paths hit the `qlinear_pointwise_binary` fusion with `sum` as the post op, but it does not collapse the input dimensions when `dim > 2`. This PR triggers dimension collapse in `qlinear_pointwise_binary` for 3D linear cases.
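For illustration, the "dim collapse" idea for a 3D linear input amounts to folding the leading dimensions, running a 2D GEMM, and restoring the shape (a generic sketch, independent of the qlinear implementation):
```python
import torch

x = torch.randn(4, 16, 32)                  # (batch, seq, in_features)
w = torch.randn(64, 32)                     # (out_features, in_features)

x2d = x.reshape(-1, x.shape[-1])            # collapse leading dims -> (batch * seq, in_features)
y2d = x2d @ w.t()                           # plain 2D GEMM
y = y2d.reshape(*x.shape[:-1], w.shape[0])  # restore -> (batch, seq, out_features)
print(y.shape)                              # torch.Size([4, 16, 64])
```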
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148245
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,597,034
|
[inductor] `nn.Upsample-torch.linalg.lu_factor` outputs inconsistent results with eager
|
shaoyuyoung
|
closed
|
[
"high priority",
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom description**: `nn.Upsample-torch.linalg.lu_factor` outputs inconsistent results with eager. Note that the trigger condition is `scale_factor >= 2`.
**device backend**: both CPP and triton
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
import os
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear')

    def forward(self, x):
        x = self.upsample(x)
        x, _ = torch.linalg.lu_factor(x)
        return x
model = Model().eval().cuda()
x = torch.randn(1, 1, 64, 64).cuda()
inputs = [x]
def run_test(model, inputs, backend):
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    torch.manual_seed(0)
    output = model(*inputs)
    return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(torch.allclose(output, c_output, 1e-3, 1e-3, equal_nan=True))
print(torch.max(torch.abs(output - c_output)))
```
### Error logs
CPP
```
False
tensor(17.1822)
```
triton
```
False
tensor(12.8162, device='cuda:0')
```
### Versions
nightly 20250225
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bdhirsh
| true
|
2,888,593,972
|
[fx] Move map_aggregate to C++
|
jansel
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148292
* #148288
* #148261
* #148260
* __->__ #148243
Microbenchmarking `fx.symbolic_trace(lambda x: functools.reduce(operator.add, [x, *range(100000)]))`, before:
```
30603618 function calls (29403419 primitive calls) in 13.744 seconds
```
after:
```
25203549 function calls (24403352 primitive calls) in 12.090 seconds
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,888,592,165
|
[FSDP2] Unclear behavior of `ignored_params` in `fully_shard`
|
leonardo0lyj
|
closed
|
[
"oncall: distributed",
"module: docs",
"module: fsdp"
] | 2
|
NONE
|
Hey Andrew @awgu, as a big fan of FSDP2, it is great to see [`ignored_params`](https://github.com/pytorch/pytorch/blob/6eff6b28e4d09cbf632f79502a8e317bf5b53c34/torch/distributed/fsdp/_fully_shard/_fully_shard.py#L179) supported now 👍:
```python
"""
ignored_params: Optional(Set[nn.Parameter]): The set of parameters that we
don't want to shard with FSDP.
"""
...
managed_modules = _get_managed_modules(modules, ignored_params)
params, buffers = _get_managed_states(managed_modules, ignored_params)
_move_states_to_device(params, buffers, device)
if params:
    state._fsdp_param_group = FSDPParamGroup(
        params,
        modules,
        ...
    )
...
```
So the behavior of ignored params, non-ignored params, and all buffers is as follows:
-- after `_get_managed_modules` and `_get_managed_states` --
- ignored params: not managed, not in `params`
- non-ignored params: managed, in `params`
- all buffers: managed, in `buffers`
-- after `_move_states_to_device` ---
- ignored params: on `cpu` or `cuda`
- non-ignored params: on `cuda` (assume mesh is on `cuda`)
- all buffers: on `cuda`
-- after `FSDPParamGroup` ---
- ignored params: untouched tensors on `cpu` or `cuda`
- non-ignored params: sharded tensors, `DTensor(Shard())`, on `cuda`
- all buffers: untouched tensors on `cuda`
However, here come the problems:
*problem-1* -- those ignored params can still reside on `cpu` while non-ignored params and buffers have been moved onto `cuda`, which causes an error during `forward()` when computing with mixed device types!
- the manual solution: rely on users manually moving those ignored params to `cuda` before `forward()`, but that requires extra effort; in particular, the skipped `_move_states_to_device` is not mentioned in the API docs, which only say `don't want to shard with FSDP.`
- the automatic solution: let ignored params go through `_move_states_to_device` as well, which guarantees the same device type during `forward()`, hassle free, yo~
*problem-2* -- is there really a difference between `buffers` and `ignored_params` in behavior? IMO, they are both untouched tensors, not fully sharded, but both should be treated as managed (`_get_managed_states`) and moved to device (`_move_states_to_device`).
What do you think? Appreciated 🙏
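As a concrete illustration of the manual workaround from problem-1 (a minimal sketch assuming a CUDA device; `ignored` is a stand-in for the set passed as `ignored_params`):
```python
import torch
from torch import nn

model = nn.Linear(8, 8)
ignored = {model.bias}  # hypothetical: the same set passed as fully_shard(..., ignored_params=ignored)

device = torch.device("cuda", torch.cuda.current_device())
for p in ignored:
    p.data = p.data.to(device)  # move the storage in place; the Parameter object survives
```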
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @svekars @sekyondaMeta @AlannaBurke @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,888,587,037
|
[inductor] [cpu] `nn.Tanhshrink-atan2` output inconsistent results with eager
|
shaoyuyoung
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom description**: when using `nn.Tanhshrink` and `atan2` together, the output is inconsistent with eager.
**device backend**: only CPP
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
import os
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.shrink = nn.Tanhshrink()

    def forward(self, x):
        x = self.shrink(x)
        x = torch.atan2(x, x)
        return x
model = Model()
x = torch.randn(1, 3, 64, 64)
inputs = [x]
def run_test(model, inputs, backend):
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    torch.manual_seed(0)
    output = model(*inputs)
    return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(torch.allclose(output, c_output, 1e-3, 1e-3, equal_nan=True))
print(torch.max(torch.abs(output - c_output)))
```
### Error logs
CPP
```
False
tensor(3.1416)
```
Triton
```
True
tensor(0., device='cuda:0')
```
### Versions
nightly 20250225
cc @chauhang @penguinwu
| true
|
2,888,470,940
|
handle jk for emulation runs
|
BoyueZheng
|
open
|
[
"fb-exported",
"Stale",
"module: inductor"
] | 8
|
NONE
|
Summary:
Seeing a JK error on a platform which has no service network: https://www.internalfb.com/sandcastle/workflow/2260807012946333388/artifact/actionlog.2260807013059769273.stdout.1?selectedLines=1979-1980-1-1
So just fall back if JK is disabled.
Test Plan: ez
Reviewed By: openrichardfb
Differential Revision: D70433783
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,448,720
|
[MPS] Fix SDPA crash
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148239
If the operation is invoked with a mask twice, it will crash, as the mask expansion logic was implemented inside the cache creation block, which is executed only once for all shapes.
Fixes https://github.com/pytorch/pytorch/issues/148194 which is a regression introduced by https://github.com/pytorch/pytorch/pull/147545
| true
|
2,888,411,504
|
[ROCm][TunableOp] Add support for rowwise scaling on scaled GEMM.
|
naromero77amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"topic: not user facing",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 4
|
COLLABORATOR
|
This PR adds support for rowwise scaling versus tensorwise scaling on scaled GEMM.
There are a few other items included in this PR as well:
- Fixes for offline tuning of scaled GEMM
- Simplification of existing offline UT
- Update existing online UT to also test rowwise versus tensorwise scaled GEMM
- New UT for offline scaled GEMM
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,888,398,710
|
Enable XPU for Inductor MM Triton Kernel Benchmark
|
EikanWang
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148237
#147620 enabled `force_shape_pad` for the triton kernel benchmark. Intel GPU supports this scenario, hence we enable the case in this PR. Otherwise, there would be a test case regression for Intel GPU now that #147620 has landed.
cc @voznesenskym @penguinwu @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,364,304
|
[cutlass backend] try reenable subproc add mm test
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148236
* #148234
* #148233
* #148229
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,362,022
|
Make require_contiguous require exact strides instead of stride order
|
eellison
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148235
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,346,646
|
[cutlass backend] Expand addmm test to AOTI and dynamic shape
|
henrylhtsang
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148234
Fixed one problem with symint and enabled addmm for dynamic shapes.
Not every case of addmm is supported yet. The case where the bias has shape (N) is still unsupported, for some reason; my hunch is something wrong with strides. But let's enable more tests for now.
Thanks @ColinPeppler for the expertise on dynamic shapes.
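For reference, a sketch of the shape pattern this targets (backend and max-autotune configuration omitted; the 2D bias is chosen deliberately, since the (N)-shaped bias case is noted above as unsupported):
```python
import torch

def f(bias, a, b):
    return torch.addmm(bias, a, b)

a = torch.randn(64, 32, device="cuda", dtype=torch.float16)
b = torch.randn(32, 128, device="cuda", dtype=torch.float16)
bias = torch.randn(64, 128, device="cuda", dtype=torch.float16)  # 2D bias, not the unsupported (N,) case

torch._dynamo.mark_dynamic(a, 0)  # make the M dimension dynamic
out = torch.compile(f)(bias, a, b)
```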
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,888,334,274
|
[cutlass backend] fix assertion that prevent self multiplication
|
henrylhtsang
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148233
# Problem:
In a matmul, sometimes some of the nodes are the same, say `A @ A`. In that case, when writing the stride of node B, we have to figure out whether we want lda or ldb, both of which point to the same node, and we have no way to differentiate between them.
# Solution
Just use either one, since they are the same.
# Question
What if we compile with `A @ A` and then pass in `A @ B`? Well, inductor guards will raise an error.
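A minimal sketch of the scenario (whether the second call recompiles or errors is determined by the guard machinery described above):
```python
import torch

def mm(x, y):
    return x @ y

A = torch.randn(64, 64)
B = torch.randn(64, 64)

compiled = torch.compile(mm)
compiled(A, A)  # both operands are literally the same buffer, so lda == ldb
compiled(A, B)  # distinct buffers; per the note above, the guards catch the mismatch
```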
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|