| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,925,921,898
|
ONNX: export failure ❌ 1.1s: Exporting the operator 'aten::fft_fft2' to ONNX opset version 17 is not supported.
|
EvelynCarter
|
closed
|
[
"module: onnx",
"triaged"
] | 12
|
NONE
|
### 🚀 The feature, motivation and pitch
Exporting the operator 'aten::fft_fft2' to ONNX opset version 18 is not supported.
I am trying to convert a torch model to an ONNX model.
How can I solve this problem?
(Da) D:\Create\YOLOv8-TensorRT-main>E:/anaconda3/envs/Da/python.exe d:/Create/YOLOv8-TensorRT-main/export.py
Ultralytics YOLOv8.2.50 🚀 Python-3.10.16 torch-2.2.2+cu121 CPU (Intel Core(TM) i7-9750H 2.60GHz)
YOLOv8-SOEP summary (fused): 194 layers, 3307670 parameters, 0 gradients, 11.8 GFLOPs
PyTorch: starting from 'v8doep.pt' with input shape (1, 3, 320, 320) BCHW and output shape(s) (1, 6, 2100) (6.6 MB)
ONNX: starting export with onnx 1.17.0 opset 17...
ONNX: export failure ❌ 1.1s: Exporting the operator 'aten::fft_fft2' to ONNX opset version 17 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: h
How do I solve this problem?
ONNX: export failure ❌ 1.1s: Exporting the operator 'aten::fft_fft2' to ONNX opset version 17 is not supported.
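For reference, a minimal sketch that reproduces the same exporter limitation (assuming only that the model calls `torch.fft.fft2` in its forward; file names are placeholders):
```python
import torch

class FFT2Block(torch.nn.Module):
    def forward(self, x):
        # any graph containing aten::fft_fft2 triggers the unsupported-op error
        return torch.fft.fft2(x).real

model = FFT2Block().eval()
dummy = torch.randn(1, 3, 320, 320)
# per the report above, this fails with:
# "Exporting the operator 'aten::fft_fft2' to ONNX opset version 17 is not supported"
torch.onnx.export(model, dummy, "fft2.onnx", opset_version=17)
```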
### Alternatives
_No response_
PyTorch version: 2.0.0
onnx version: 1.13.1
Python version: 3.8.10
CUDA/cuDNN version: 11.2
GPU models and configuration: RTX 3090 24G
_No response_
| true
|
2,925,878,988
|
Migrate to new theme
|
svekars
|
closed
|
[
"oncall: distributed",
"module: docs",
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: docs",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)",
"suppress-bc-linter"
] | 7
|
CONTRIBUTOR
|
- Migrate pytorch docs, cpp docs and functorch docs to the pytorch_sphinx_theme2
- Migrate index.rst to Markdown and restructure it to use high-level horizontal-bar sections (Python API, Developer Notes)
- Added python-api.md, which becomes the main container for the API docs. This file will be used to add all API references to the toctree. It would be great to have a lint for this file: https://github.com/pytorch/pytorch/issues/150718
- Enabled the mermaid and opengraph Sphinx extensions (see the conf.py sketch below)
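A minimal conf.py sketch of what enabling those two extensions typically looks like (assuming the usual package names `sphinxcontrib.mermaid` and `sphinxext.opengraph`; the actual PR may wire this up differently):
```python
# docs/source/conf.py (sketch)
extensions = [
    "pytorch_sphinx_theme2",    # new theme
    "sphinxcontrib.mermaid",    # mermaid diagrams in .rst/.md
    "sphinxext.opengraph",      # OpenGraph metadata for link previews
]
html_theme = "pytorch_sphinx_theme2"
```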
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @sekyondaMeta @AlannaBurke @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,925,871,968
|
[torch][ao] Do not crash numerics debugger if the shapes of the tensors do not match
|
dulinriley
|
open
|
[
"fb-exported",
"ciflow/trunk",
"release notes: quantization",
"release notes: AO frontend"
] | 14
|
CONTRIBUTOR
|
Summary: Occasionally we see the loss function crash because the shapes of the `a` and `b` tensors are different. This diff avoids crashing in such scenarios and lets the comparison work for the other nodes where the shapes match.
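A minimal sketch of the guarded comparison described above (hypothetical helper, not the actual code in the diff):
```python
import torch

def compare_node_outputs(a: torch.Tensor, b: torch.Tensor):
    # skip nodes whose shapes disagree instead of crashing the whole debug run
    if a.shape != b.shape:
        return None
    return torch.nn.functional.mse_loss(a.float(), b.float()).item()
```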
Test Plan: - CI
Reviewed By: jerryzh168
| true
|
2,925,868,583
|
Expose GIL and GC events in profiler traces
|
cyme
|
open
|
[
"open source"
] | 3
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,925,846,769
|
[codemod][lowrisk] Remove unused exception parameter from caffe2/aten/src/ATen/cuda/CUDABlas.cpp
|
r-barnes
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Summary:
`-Wunused-exception-parameter` has identified an unused exception parameter. This diff removes it.
This:
```
try {
...
} catch (exception& e) {
// no use of e
}
```
should instead be written as
```
} catch (exception&) {
```
If the code compiles, this is safe to land.
Test Plan: Sandcastle
Reviewed By: dtolnay
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,925,822,081
|
[MTIA] Support loading Tensors on mtia:0 for pytorch code
|
zimin2000
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 14
|
CONTRIBUTOR
|
Summary: The diff includes updates to the PyTorch code to enable loading tensors to MTIA.
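A sketch of the behavior this enables (assuming an MTIA-enabled build; the checkpoint path is a placeholder):
```python
import torch

# map a checkpoint saved on another device onto the first MTIA device at load time
state_dict = torch.load("checkpoint.pt", map_location="mtia:0")
```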
Reviewed By: PatriceVignola
Differential Revision: D71176848
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,925,805,553
|
[BE]: Update mypy to 1.15
|
Skylion007
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,925,660,009
|
☂️ MPS support for large tensors
|
malfet
|
open
|
[
"module: crash",
"triaged",
"module: 64-bit",
"module: mps"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Random ATen operations that use the MPS backend can fail with `Error: total bytes of NDArray > 2**32` or other errors that expect either the tensor size to be less than 4 GB or the total number of elements to be indexable by a 32-bit index.
This is an umbrella issue to track those (they should be searchable by `module: mps` + `module: 64-bit`).
We need to figure out some tooling to detect broken ops and how to run tests, as allocating large tensors on the machines we have right now is simply not going to work.
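A minimal sketch of the failure mode (assumes an Apple Silicon machine with enough unified memory; which op actually trips the limit varies):
```python
import torch

# a single tensor with more than 2**32 elements / more than 4 GB of data can
# exceed 32-bit size or index limits inside some MPS kernels
x = torch.ones(2**32 + 1, device="mps", dtype=torch.float16)  # ~8 GB
y = x + 1  # some ops may fail with "total bytes of NDArray > 2**32"
```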
Example of existing issues:
- https://github.com/pytorch/pytorch/issues/149261
- https://github.com/pytorch/pytorch/issues/140570
- https://github.com/pytorch/pytorch/issues/122916
- https://github.com/pytorch/pytorch/issues/116769
- https://github.com/pytorch/pytorch/issues/143859
### Versions
CI
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,925,594,543
|
Unguarded Usage of Facebook Internal Code?
|
BwL1289
|
open
|
[
"triaged",
"module: third_party",
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
There is a [reference](https://github.com/pytorch/pytorch/blob/c7c3e7732443d7994303499bcb01781c9d59ab58/torch/_inductor/fx_passes/group_batch_fusion.py#L25) to `import deeplearning.fbgemm.fbgemm_gpu.fb.inductor_lowerings`, which we believe to be a Facebook-internal Python module based on the description of this [commit](https://github.com/pytorch/benchmark/commit/e26cd75d042e880676a5f21873f2aaa72e178be1).
It looks like if the module isn't found, `torch` disables some `fbgemm` inductor lowerings.
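The pattern in question looks roughly like this guarded import (paraphrased sketch; the exact code in `group_batch_fusion.py` may differ by version):
```python
# if the FB-internal module is absent, the fbgemm-specific fusions are skipped
try:
    import deeplearning.fbgemm.fbgemm_gpu.fb.inductor_lowerings  # noqa: F401
    has_fbgemm = True
except ModuleNotFoundError:
    has_fbgemm = False
```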
Is this expected for this code snippet, or should this rely on publicly available `fbgemm`?
### Versions
It looks like this module has been used as described above since (at least) torch's transition to open source.
cc @chauhang @penguinwu
| true
|
2,925,560,325
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex128 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 9
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex128&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38877893175).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex128`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,925,524,583
|
[MTIA] Add _mtia_exchangeDevice to MTIA module
|
PatriceVignola
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary: The FlexAttention path uses `_exchange_device`, so it will be needed eventually for MTIA as well.
Test Plan: `buck2 test fbcode//mtia/host_runtime/torch_mtia/tests:test_torch_mtia_api -- test_exchange_device`
Reviewed By: chaos5958
Differential Revision: D70072059
| true
|
2,925,433,227
|
[compile] Switch off inference mode during compilation
|
anijain2305
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148953
* __->__ #149321
This PR does the following (a minimal sketch of the affected scenario follows the list):
* Turns `inference_mode` off and uses `no_grad` for `convert_frame` if `inference_mode` is on globally.
* Turns off `inference_mode` for fake tensor propagation. This ensures that converting a real inference tensor to a fake tensor removes the inference-ness.
* Graph breaks on `is_inference` and `is_inference_mode_enabled`.
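Sketch of the scenario these changes target (compiling while `inference_mode` is enabled globally):
```python
import torch

@torch.compile
def f(x):
    return x * 2

# compilation is triggered under a globally enabled inference_mode; per the
# bullets above, dynamo now compiles under no_grad instead and graph-breaks
# on is_inference checks
with torch.inference_mode():
    out = f(torch.randn(4))
    print(out.is_inference())
```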
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,925,397,867
|
[DCP] Avoid in-place update and deepcopy during dedupe
|
saumishr
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 10
|
CONTRIBUTOR
|
Summary:
Avoid the in-place update and deepcopy during dedupe. Deepcopy becomes prohibitively expensive for models with a huge number of FQNs; this manifested in the Ads 2K experiment as well. Here are the results from the TextRay model in Mitra:
#### Control job with deepcopy regression:
First save ~24.8s
Global step latency ~7-8s
#### Test job with the new fix to avoid deepcopy:
First save ~21s
Global step latency ~2s
Test Plan:
```
buck test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/distributed/checkpoint:test_planner
```
https://www.internalfb.com/intern/testinfra/testrun/3940649945104822
Differential Revision: D71245218
cc @LucasLLC @pradeepfn @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,925,294,266
|
Need some sort of pattern matcher recipes somewhere
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
I've been asked this 3x in the last week
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,925,156,351
|
BypassFxGraphCache should have better error message for HOPs (also we should support more)
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
torch._inductor.codecache.BypassFxGraphCache: Can't cache HigherOrderOperators
cc @chauhang @penguinwu
| true
|
2,925,128,104
|
Refactoring Distributed test cases to be device agnostic [2/n]
|
AnantGulati
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Continuing the series from #145222
In this series of PRs we intend to refactor the distributed test cases to be completely device agnostic.
These changes include the following approaches:
- Allowing for multiple device types using instantiate_device_type_test
- Replacing calls to the CUDA stream with torch.get_device_module(device) wherever it applies
- Skipping the setup steps required by MultiProcessTestCase when using DistributedTestBase (https://github.com/pytorch/pytorch/pull/138216) wherever applicable
- Replacing explicit references to a distributed backend (NCCL, HCCL, etc.) with get_default_backend_for_device (https://github.com/pytorch/pytorch/pull/140536)
This should improve usability for all devices (a small sketch of the pattern follows).
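Sketch of the device-agnostic pattern (uses CUDA for illustration; helper names are the ones referenced in the list above, and exact namespaces may vary by version):
```python
import torch
import torch.distributed as dist

device = "cuda"  # could equally be another accelerator that registers a device module
device_module = torch.get_device_module(device)   # e.g. torch.cuda for "cuda"
stream = device_module.current_stream()           # instead of torch.cuda.current_stream()
backend = dist.get_default_backend_for_device(device)  # e.g. "nccl", instead of hard-coding it
print(stream, backend)
```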
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,925,042,754
|
Improve docker build cleanup on s390x runners
|
AlekseiNikiforovIBM
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Currently it sometimes still leaves a couple of processes running.
| true
|
2,924,838,342
|
How to Retain Computational Graph in torch.func.jvp() for Parameter Gradients?
|
edouardoyallon
|
open
|
[
"module: autograd",
"triaged",
"module: functorch"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
## Help Needed: Making `torch.func.jvp` Work with `torch.autograd.grad`
Hi all,
Thanks so much for all the functionalities of pytorch! I'm trying to make the following code valid (and efficient):
```python
output_values, output_grads = torch.func.jvp(model, input_value, input_grads)
torch.autograd.grad(output_values, tuple(model.parameters()), grad_outputs=output_grads)
```
One way to phrase it is that we have a function $f: \mathbb{R}^d \times \mathbb{R}^m \to \mathbb{R}^p$. Then, given $(x, t_x) \in \mathbb{R}^{d}\times \mathbb{R}^{d}$, the goal is to compute: $y = f(x,w)$, the tangent vector $t_y = D_1 f(x, w).t_x$ and the gradient $t_w = D_2 f(x, w)^T.t_y$, in order to materialize the mapping: $((x, t_x), w) \to ((y, t_y), t_w)$.
Currently, the code fails because `torch.func.jvp()` does not retain the computational graph of the forward pass, which makes sense for the dual vectors associated with the input. However, I know it's possible, for example, to efficiently decouple the computation of input gradients and weight gradients by selectively extracting parts of the computational graph.
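A self-contained version of the failing pattern, for concreteness (sketch; `model` is just a toy `nn.Linear` here):
```python
import torch

model = torch.nn.Linear(3, 2)
x, tx = torch.randn(3), torch.randn(3)

y, ty = torch.func.jvp(lambda inp: model(inp), (x,), (tx,))
# per the description above, this is where it breaks: y carries no autograd
# graph back to the parameters, so grad() cannot compute t_w
tw = torch.autograd.grad(y, tuple(model.parameters()), grad_outputs=ty)
```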
I'd like to do something similar here. My goal is to develop a procedure that achieves this while requiring only a single forward pass (and freeing unnecessary memory).
Would you have any insights on how to implement this efficiently? I believe it's related to [this paper](https://arxiv.org/pdf/2402.14212), which provides a solution in JAX, but I think it should also be possible in PyTorch.
Any guidance or suggestions would be greatly appreciated—thanks in advance for your help!
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,924,828,478
|
Inductor Incorrectly Handles `torch.view_copy` When Changing dtype
|
WLFJ
|
open
|
[
"triaged",
"module: viewing and reshaping",
"module: python frontend",
"module: edge cases",
"oncall: pt2",
"topic: fuzzer"
] | 3
|
NONE
|
### 🐛 Describe the bug
~The `torch.view_copy` function produces incorrect results in Eager mode when changing the `dtype` of a tensor. However, when using `torch.compile` (Inductor), the results are correct. This suggests a potential bug in the Eager implementation of `torch.view_copy`.~
Compile causes wrong answer in this corner case:
```python
import torch
def f():
res = torch.arange(1, 5, dtype=torch.float32)
res_copy = torch.view_copy(res, dtype=torch.float64)
return res, res_copy
print('@@@@ INDUCTOR @@@@')
res, res_copy = torch.compile(f)()
print('res', res)
print('res_copy', res_copy)
print()
print('@@@@ Eager @@@@')
res, res_copy = f()
print('res', res)
print('res_copy', res_copy)
```
~The output in Eager mode is incorrect:~
Testcase Result:
```
@@@@ INDUCTOR @@@@
res tensor([1., 2., 3., 4.])
res_copy tensor([1., 2., 3., 4.], dtype=torch.float64)
@@@@ Eager @@@@
res tensor([1., 2., 3., 4.])
res_copy tensor([ 2.0000, 512.0001], dtype=torch.float64)
```
### Versions
PyTorch 2.7.0.dev20250218+cu124
cc @albanD @chauhang @penguinwu @ezyang @gchanan @zou3519 @kadeng @msaroufim
| true
|
2,924,788,689
|
Training error: due to torch.load(weights, map_location='cpu') - Weights only load failed.
|
karishmathumu
|
open
|
[
"module: serialization",
"triaged"
] | 2
|
NONE
|
```
sudo apt-get update -y
sudo apt-get install -y python3-pip git
git clone https://github.com/WongKinYiu/yolov9.git /home/ubuntu/yolov9
pip3 install --upgrade 'numpy<2'
#pip install --upgrade torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1
pip3 install -r /home/ubuntu/yolov9/requirements.txt
cd yolov9
mkdir -p weights
wget -P weights -q https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-e.pt
#download dataset from roboflow
pip install -q roboflow
python3
import roboflow
roboflow.login()
rf = roboflow.Roboflow()
project = rf.workspace("roboflow-jvuqo").project("football-players-detection-3zvbc")
version = project.version(8)
dataset = version.download("yolov9")
exit()
python3 train_dual.py --batch 16 --epochs 25 --img 640 --device 0 --min-items 0 --close-mosaic 15 --data football-players-detection-8/data.yaml --weights weights/yolov9-e.pt --cfg models/detect/yolov9-e.yaml --hyp hyp.scratch-high.yaml
```
### IDEAL RESULTS:

_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL models.yolo.DetectionModel was not an allowed global by default. Please use `torch.serialization.add_safe_globals([DetectionModel])` or the `torch.serialization.safe_globals([DetectionModel])` context manager to allowlist this global if you trust this class/function.
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
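A minimal sketch of the allowlist route that the error message itself recommends (assuming the YOLOv9 repo's `models.yolo.DetectionModel` import path):
```python
import torch
from models.yolo import DetectionModel  # class named in the UnpicklingError

torch.serialization.add_safe_globals([DetectionModel])
ckpt = torch.load("weights/yolov9-e.pt", map_location="cpu")  # weights_only stays True
```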
-------------------------------------------------------------------------------------------------------------------------
### Trying Fix (1) :
- Pinning PyTorch with `pip install --upgrade torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1` before running `pip3 install -r /home/ubuntu/yolov9/requirements.txt` seemed to have started training on Google Colab
**ISSUES:**
a) but not all results are achieved in the _**runs/train/exp**_
b) and also resulting in another error: **AttributeError: 'FreeTypeFont' object has no attribute 'getsize'**

-------------------------------------------------------------------------------------------------------------------------
### Trying Fix (2) :
- changing the line 110 to `ckpt = torch.load(weights, map_location='cpu', weights_only=False)` # load checkpoint to CPU to avoid CUDA memory leak
This seemed to have started the training with train_dual.py for yolov9-e.pt and yolov9-e.yaml.
**ISSUES:**
a) **AttributeError: 'FreeTypeFont' object has no attribute 'getsize'**
b) **this one has much lesser results in runs/train/exp than that of Fix 1.**


cc @mruberry @mikaylagawarecki
| true
|
2,924,687,966
|
Fix `lr_scheduler` unexpectedly calls `step()` when init argument last_epoch is larger than -1
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"topic: bug fixes",
"release notes: optim"
] | 5
|
CONTRIBUTOR
|
Fixes #102261
## Changes
- Use an `_is_initial` flag instead of the `self.last_epoch == 0` condition to decide whether `lr` should be the initial value (see the sketch below)
- Add test for `ExponentialLR` checkpoint usecase
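A minimal sketch of the checkpoint-resume case this targets (illustrative only, based on #102261; exact values depend on the schedule):
```python
import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=1.0)
opt.param_groups[0]["initial_lr"] = 1.0  # required when passing last_epoch > -1
# resuming at epoch 3 should restore the checkpointed lr, not apply an extra
# decay from an unwanted step() inside __init__
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9, last_epoch=3)
print(opt.param_groups[0]["lr"])
```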
## Test Result
```python
pytest -s test/optim/test_lrscheduler.py -vv
```

| true
|
2,924,627,079
|
Change profiler notation from 'b' to 'B'
|
NEGU93
|
open
|
[
"oncall: profiler"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
Change profiler notation from 'b' to 'B' as we are talking about bytes.
Currently, the profiler reports memory consumption with units like 'Gb'. This is confusing, as standard notation uses 'B' for bytes and 'b' for bits (suffice it to google "bytes vs bits notation").
This has already caused confusion, as seen [here](https://discuss.pytorch.org/t/cuda-memory-profiling-perculiar-memory-values/204999/2?u=agustin_barrachina).
### Alternatives
_No response_
### Additional context
_No response_
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,924,362,230
|
[windows MSVC] Linker receives broken paths
|
loscrossos
|
closed
|
[
"module: windows",
"triaged",
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
NONE
|
### 🐛 Describe the bug
I am using the torch library (using it, not compiling it) with torch.compile enabled through another library (Zonos).
At runtime the code tried to compile some generated code, so I ran it through the Visual Studio developer command line.
It crashes with this error:
```
File "c:\code\.env\lib\site-packages\torch\_inductor\cpp_builder.py", line 349, in run_compile_cmd
return _run_compile_cmd(cmd_line, cwd)
File "c:\code\.env\lib\site-packages\torch\_inductor\cpp_builder.py", line 343, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CppCompileError: C++ compile error
##comment from mytait: ERROR HERE
Command:
cl /I C:/Program Files/Python310/Include /I c:/code/.env/lib/site-packages/torch/include /I c:/code/.env/lib/site-packages/torch/include/torch/csrc/api/include /I c:/code/.env/lib/site-packages/torch/include/TH /I c:/code/.env/lib/site-packages/torch/include/THC /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/user/AppData/Local/Temp/torchinductor_user/ou/coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.cpp /LD /FeC:/Users/user/AppData/Local/Temp/torchinductor_user/ou/coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.pyd /link /LIBPATH:c:/code/.env/Scripts/libs /LIBPATH:c:/code/.env/lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib
Output:
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34809 for x86
Copyright (C) Microsoft Corporation. All rights reserved.
cl : Command line warning D9025 : overriding '/openmp' with '/openmp:experimental'
cl : Command line warning D9024 : unrecognized source file type 'Files/Python310/Include', object file assumed
coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.cpp
C:/Users/user/AppData/Local/Temp/torchinductor_user/ou/coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.cpp(21): fatal error C1083: Cannot open include file: 'Python.h': No such file or directory
```
This comes from torch putting a path with spaces, `C:/Program Files/Python310/Include`, into the compiler/linker options; the tool breaks the path at the space and does not recognize it: `unrecognized source file type 'Files/Python310/Include'`.
The paths seem to be generated in `_get_python_related_args()`, which calls `_get_python_include_dirs()` and other sources.
https://github.com/pytorch/pytorch/blob/1cc5f6b623907579a0e3e172b061391b171b9fa5/torch/_inductor/cpp_builder.py#L819
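A sketch of the kind of quoting fix being suggested (hypothetical helper, not the actual `cpp_builder.py` code):
```python
def quote_if_needed(path: str) -> str:
    # paths containing spaces must be quoted before being joined into the
    # cl/link command line, otherwise the tool splits them at the space
    return f'"{path}"' if " " in path and not path.startswith('"') else path

include_dirs = [r"C:/Program Files/Python310/Include"]
print(" ".join(f"/I {quote_if_needed(d)}" for d in include_dirs))
# -> /I "C:/Program Files/Python310/Include"
```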
However, if I add a fix there, I keep getting errors further down the road, where the linker is now passed the path twice:
```
File "c:\code\.env\lib\site-packages\torch\_inductor\graph.py", line 2068, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2759, in load_by_key_path
mod = _reload_python_module(key, path)
File "c:\code\.env\lib\site-packages\torch\_inductor\runtime\compile_tasks.py", line 45, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "C:\Users\user\AppData\Local\Temp\torchinductor_user\da\cdaz6uiosgas2c5kiizkrk4o54ul5xvalbtiel2zqco6kizuxskm.py", line 30, in <module>
cpp_fused_eq_0 = async_compile.cpp_pybinding(['const int64_t*', 'bool*'], '''
File "c:\code\.env\lib\site-packages\torch\_inductor\async_compile.py", line 233, in cpp_pybinding
return CppPythonBindingsCodeCache.load_pybinding(argtypes, source_code)
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2262, in load_pybinding
return cls.load_pybinding_async(*args, **kwargs)()
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2254, in future
result = get_result()
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2045, in load_fn
result = worker_fn()
File "c:\code\.env\lib\site-packages\torch\_inductor\codecache.py", line 2085, in _worker_compile_cpp
cpp_builder.build()
File "c:\code\.env\lib\site-packages\torch\_inductor\cpp_builder.py", line 1553, in build
status = run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "c:\code\.env\lib\site-packages\torch\_inductor\cpp_builder.py", line 349, in run_compile_cmd
return _run_compile_cmd(cmd_line, cwd)
File "c:\code\.env\lib\site-packages\torch\_inductor\cpp_builder.py", line 343, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CppCompileError: C++ compile error
##comment from mytait: ERROR HERE
Command:
cl /I C:/Program Files/Python310/Include /I C:/Program Files/Python310/Include /I c:/code/.env/lib/site-packages/torch/include /I c:/code/.env/lib/site-packages/torch/include/torch/csrc/api/include /I c:/code/.env/lib/site-packages/torch/include/TH /I c:/code/.env/lib/site-packages/torch/include/THC /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/user/AppData/Local/Temp/torchinductor_user/cr/ccrpput7ecr45fn4hgxfl7sjbl73htzrrpb4ba75fhzc2nhzjla6.cpp /LD /FeC:/Users/user/AppData/Local/Temp/torchinductor_user/cr/ccrpput7ecr45fn4hgxfl7sjbl73htzrrpb4ba75fhzc2nhzjla6.pyd /link /LIBPATH:c:/code/.env/Scripts/libs /LIBPATH:c:/code/.env/lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib
Output:
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34809 for x86
Copyright (C) Microsoft Corporation. All rights reserved.
cl : Command line warning D9025 : overriding '/openmp' with '/openmp:experimental'
cl : Command line warning D9024 : unrecognized source file type 'Files/Python310/Include', object file assumed
ccrpput7ecr45fn4hgxfl7sjbl73htzrrpb4ba75fhzc2nhzjla6.cpp
Microsoft (R) Incremental Linker Version 14.43.34809.0
Copyright (C) Microsoft Corporation. All rights reserved.
/dll
/implib:C:/Users/user/AppData/Local/Temp/torchinductor_user/cr/ccrpput7ecr45fn4hgxfl7sjbl73htzrrpb4ba75fhzc2nhzjla6.lib
/out:C:/Users/user/AppData/Local/Temp/torchinductor_user/cr/ccrpput7ecr45fn4hgxfl7sjbl73htzrrpb4ba75fhzc2nhzjla6.pyd
/LIBPATH:c:/code/.env/Scripts/libs
/LIBPATH:c:/code/.env/lib/site-packages/torch/lib
torch.lib
torch_cpu.lib
torch_python.lib
sleef.lib
Files/Python310/Include
ccrpput7ecr45fn4hgxfl7sjbl73htzrrpb4ba75fhzc2nhzjla6.obj
LINK : fatal error LNK1181: cannot open input file 'Files\Python310\Include.obj'
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 (10.0.26100 64-bit)
GCC version: Could not collect
Clang version: 19.1.1
CMake version: version 3.30.5-msvc23
Libc version: N/A
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX
Nvidia driver version: 572.61
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel
Manufacturer: GenuineIntel
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] triton-windows==3.2.0.post15
[conda] Could not collect
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu
| true
|
2,924,316,429
|
Enable max autotune for AOTInductor benchmark
|
zxd1997066
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 28
|
CONTRIBUTOR
|
With this PR, AOTInductor can optionally run in max-autotune mode when benchmarking.
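For reference, a sketch of the knob the benchmark flag is assumed to toggle (Inductor's max-autotune mode; the benchmark harness may expose it differently):
```python
import torch

model = torch.nn.Linear(64, 64)
x = torch.randn(8, 64)

# max-autotune benchmarks multiple kernel/template choices at compile time
opt = torch.compile(model, mode="max-autotune")
print(opt(x).shape)
```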
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @chuanqi129
| true
|
2,924,313,602
|
Add RECORD_FUNCTION for aoti_xxx
|
shiyang-weng
|
closed
|
[
"triaged",
"open source",
"module: inductor"
] | 8
|
CONTRIBUTOR
|
Fixes #148650
add RECORD_FUNCTION for aoti_xxx
Example:
```python
import torch
from torch.testing._internal.common_utils import run_tests, TemporaryFileName, TestCase
from torch.utils import ThroughputBenchmark
from contextlib import nullcontext
from torch._dynamo import config
from torch._inductor import config as inductor_config
class TwoLayerNet(torch.jit.ScriptModule):
def __init__(self, D_in, H, D_out):
super().__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(2 * H, D_out)
@torch.jit.script_method
def forward(self, x1, x2):
h1_relu = self.linear1(x1).clamp(min=0)
h2_relu = self.linear1(x2).clamp(min=0)
cat = torch.cat((h1_relu, h2_relu), 1)
y_pred = self.linear2(cat)
return y_pred
class TwoLayerNetModule(torch.nn.Module):
def __init__(self, D_in, H, D_out):
super().__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(2 * H, D_out)
def forward(self, x1, x2):
h1_relu = self.linear1(x1).clamp(min=0)
h2_relu = self.linear1(x2).clamp(min=0)
cat = torch.cat((h1_relu, h2_relu), 1)
y_pred = self.linear2(cat)
return y_pred
Module = TwoLayerNetModule
dtype = torch.bfloat16
config.error_on_recompile = True
inductor_config.cpp_wrapper = True
inductor_config.freezing = True
D_in = 10
H = 5
D_out = 15
B = 8
autocast = dtype != torch.float32
module = Module(D_in, H, D_out)
input = (torch.randn(B, D_in), torch.randn(B, D_in))
with torch.no_grad(), torch.amp.autocast("cpu", enabled=autocast, dtype=dtype):
torch._dynamo.reset()
module(*input)
module = torch.compile(module)
module(*input)
module(*input)
with torch.autograd.profiler.profile() as prof:
with torch.no_grad(), torch.amp.autocast("cpu", enabled=autocast, dtype=dtype):
module(*input)
print(prof.key_averages().table(sort_by="self_cpu_time_total"))
```
Without this patch, the profiler does not show the aoti_xxx entries:
------------------------------ ------------ ------------ ------------ ------------ ------------ ------------
Torch-Compiled Region: 0/0 80.52% 376.467us 82.69% 386.590us 386.590us 1
TorchDynamo Cache Lookup 17.31% 80.955us 17.31% 80.955us 80.955us 1
aten::empty 2.17% 10.123us 2.17% 10.123us 3.374us 3
------------------------------ ------------ ------------ ------------ ------------ ------------ ------------
With this patch:
---------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
at::native::mkldnn_linear_pointwise_binary 48.54% 269.358us 50.61% 280.830us 93.610us 3
Torch-Compiled Region: 0/0 30.83% 171.110us 85.10% 472.230us 472.230us 1
TorchDynamo Cache Lookup 14.90% 82.692us 14.90% 82.692us 82.692us 1
aten::empty 2.07% 11.472us 2.07% 11.472us 3.824us 3
aoti_torch_empty_strided 1.63% 9.047us 1.63% 9.047us 4.524us 2
aoti_torch_delete_tensor_object 1.00% 5.558us 1.00% 5.558us 0.428us 13
aoti_torch__reinterpret_tensor 0.75% 4.175us 0.75% 4.175us 2.087us 2
aoti_torch_get_data_ptr 0.27% 1.510us 0.27% 1.510us 0.151us 10
---------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,924,292,859
|
[Inductor] `isin` causes AssertionError in Inductor IR but the Case Works Fine In Eager Mode
|
WLFJ
|
closed
|
[
"triaged",
"oncall: pt2"
] | 2
|
NONE
|
### 🐛 Describe the bug
test case:
```python
import torch
print(torch.__version__)
def f(*args):
sym_0, sym_1, sym_2, sym_3 = args
var_941 = torch.arange(start=sym_0, end=sym_1, step=1)
return torch.isin(var_941, sym_2, assume_unique=sym_3)
res = f(0, 1024, 1, True,)
print('eager: ', res)
res = torch.compile(f)(0, 1024, 1, True,)
print('inductor: ', res)
```
### Error logs
```
2.7.0.dev20250218+cu124
eager: tensor([False, True, False, ..., False, False, False])
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250317-bugs/test4.py", line 16, in <module>
res = torch.compile(f)(0, 1024, 1, True,)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 589, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 754, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 739, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1407, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1062, in codegen_and_compile
graph.run(*example_inputs)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1524, in run_node
result = ir.ExternKernel.require_exact_strides(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/ir.py", line 5279, in require_exact_strides
return cls.require_strides(
^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/ir.py", line 5191, in require_strides
as_storage_and_layout(
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/ir.py", line 2386, in as_storage_and_layout
return as_storage_and_layout(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/ir.py", line 2404, in as_storage_and_layout
x.data.freeze_layout_with_exact_strides(
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/ir.py", line 3787, in freeze_layout_with_exact_strides
self.layout = self.layout.as_exact_strides(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/ir.py", line 3500, in as_exact_strides
return FixedLayout(
^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_inductor/ir.py", line 3192, in __init__
assert len(size) == len(stride), f"size={size}, stride={stride}"
^^^^^^^^^^^^^^^^^^^^^^^^
torch._inductor.exc.InductorError: AssertionError: size=[], stride=[1]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
PyTorch 2.7.0.dev20250218+cu124
cc @chauhang @penguinwu
| true
|
2,924,269,241
|
`context_parallel` fails for training with `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation`
|
ydshieh
|
open
|
[
"oncall: distributed",
"triaged",
"module: context parallel"
] | 10
|
NONE
|
### 🐛 Describe the bug
Hi, I am from Hugging Face and we are trying to use `context_parallel` (using `stable` and `nightly torch`). However, for training, it fails with
> RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
I have created a minimal reproducible example in which a very simple model, `DummyModel`, is defined in the script. The same error occurs for a real model (Qwen 2.5) too.
The same error happens for both `SDPBackend.FLASH_ATTENTION` and `SDPBackend.EFFICIENT_ATTENTION`.
### To reproduce
Run the following script on a multi-GPU machine (I am using a single cloud machine with 4 A10 GPUs) as follows:
1. python script.py
2. torchrun --nproc-per-node=2 script.py --distributed
3. torchrun --nproc-per-node=2 script.py --distributed --use-cp
where 1. (not using any distributed stuff) and 2. (distributed, without CP) succeed and **3. (distributed with CP) fails**.
### script.py
```python
import torch
torch.autograd.set_detect_anomaly(True)
class DummyOutput:
def __init__(self, loss, logits, attn_out):
self.loss = loss
self.logits = logits
self.attn_out = attn_out
def __str__(self):
return str({"loss": self.loss, "logits": self.logits, "attn_out": self.attn_out})
class DummyModel(torch.nn.Module):
def __init__(self, vocab_size, hidden_dim, n_heads, is_causal=True):
super().__init__()
self.vocab_size = vocab_size
self.hidden_dim = hidden_dim
self.n_heads = n_heads
self.head_dim = hidden_dim // n_heads
self.is_causal = is_causal
self.embedding = torch.nn.Embedding(num_embeddings=vocab_size, embedding_dim=self.hidden_dim)
self.linear = torch.nn.Linear(hidden_dim, hidden_dim)
self.q = torch.nn.Linear(hidden_dim, hidden_dim)
self.k = torch.nn.Linear(hidden_dim, hidden_dim)
self.v = torch.nn.Linear(hidden_dim, hidden_dim)
self.atnn_out = torch.nn.Linear(hidden_dim, hidden_dim)
self.proj = torch.nn.Linear(hidden_dim, vocab_size)
# h being [batch_size, seq_len, hidden_dim]
# we convert it to q, k, v here
def forward(self, input_ids, labels=None):
embeddings = self.embedding(input_ids)
hidden_states = self.linear(embeddings)
# we need to change it to q, k, v with [batch_size, n_head, seq_len, head_dim]
# first, projection to get to [batch_size, seq_len, head_dim]
q = self.q(hidden_states)
k = self.k(hidden_states)
v = self.v(hidden_states)
batch_size = 1
# reshape to [batch_size, n_head, seq_len, head_dim]
q = q.view(batch_size, -1, self.n_heads, self.head_dim).transpose(1, 2)
k = k.view(batch_size, -1, self.n_heads, self.head_dim).transpose(1, 2)
v = v.view(batch_size, -1, self.n_heads, self.head_dim).transpose(1, 2)
attn_out = F.scaled_dot_product_attention(q, k, v, is_causal=self.is_causal)
# back to [batch_size, n_head, seq_len, head_dim]
# need contiguous for training
hidden = attn_out.transpose(1, 2).contiguous().view(batch_size, -1, self.n_heads * self.head_dim)
atnn_out = self.atnn_out(hidden)
logits = self.proj(atnn_out)
loss = None
if labels is not None:
loss = torch.nn.functional.cross_entropy(logits.transpose(1, 2), labels)
return DummyOutput(loss=loss, logits=logits, attn_out=attn_out)
def check(distributed=False, use_cp=False):
device = "cuda"
dtype = torch.bfloat16
sdpa_backend = SDPBackend.FLASH_ATTENTION
is_causal = True
input_ids = torch.randint(low=8, high=64, size=(1, 64), device=device)
labels = torch.clone(input_ids)
model = DummyModel(vocab_size=128, hidden_dim=128, n_heads=4, is_causal=is_causal)
model = model.to(device, dtype=dtype)
model.eval()
if distributed:
dist.broadcast(input_ids, src=0)
dist.broadcast(labels, src=0)
rank = torch.distributed.get_node_local_rank()
model = DDP(model, device_ids=[rank])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
model.train()
for step in range(3):
model.zero_grad()
optimizer.zero_grad()
with sdpa_kernel(sdpa_backend):
if use_cp:
with context_parallel(
cp_mesh, buffers=(input_ids, labels), buffer_seq_dims=(1, 1)
):
outputs = model(input_ids, labels=labels)
else:
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
print(f"device: {loss.device} | step: {step} | loss = {loss.detach().to('cpu').float().numpy()}")
loss.backward()
optimizer.step()
if __name__ == '__main__':
# python3 temp.py
# torchrun --nproc-per-node=2 temp.py --distributed
# torchrun --nproc-per-node=2 temp.py --distributed --use_cp
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--distributed", action="store_true", default=False)
parser.add_argument("--use-cp", action="store_true", default=False)
parser.add_argument("--nproc-per-node", type=int, default=1)
args = parser.parse_args()
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend
distributed = args.distributed
use_cp = args.use_cp
if distributed:
from torch.distributed.device_mesh import init_device_mesh
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.distributed as dist
if use_cp:
from torch.distributed.tensor.experimental import context_parallel
world_size = args.nproc_per_node
cp_mesh = init_device_mesh("cuda", (world_size,))
check(distributed=distributed, use_cp=use_cp)
```
### Error log
```bash
root@dff7b35823a9:/transformers# torchrun --nproc-per-node=2 script.py --distributed --use-cp
W0317 08:57:27.892000 1659 torch/distributed/run.py:766]
W0317 08:57:27.892000 1659 torch/distributed/run.py:766] *****************************************
W0317 08:57:27.892000 1659 torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0317 08:57:27.892000 1659 torch/distributed/run.py:766] *****************************************
[rank1]: Traceback (most recent call last):
[rank1]: File "/transformers/script.py", line 149, in <module>
[rank1]: check(distributed=distributed, use_cp=use_cp)
[rank1]: File "/transformers/script.py", line 105, in check
[rank1]: with context_parallel(
[rank1]: File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
[rank1]: return next(self.gen)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 36, in generator_context
[rank1]: response = gen.send(None)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/experimental/_attention.py", line 1345, in context_parallel
[rank1]: chunks = _context_parallel_buffers(mesh, buffers, buffer_seq_dims)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/experimental/_attention.py", line 1287, in _context_parallel_buffers
[rank1]: new_buffers.append(sharder.shard(buffer, mesh, seq_dim))
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/tensor/experimental/_attention.py", line 1244, in shard
[rank1]: cp_rank = mesh.get_local_rank()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/device_mesh.py", line 946, in get_local_rank
[rank1]: mesh_dim_group = not_none(self.get_group(mesh_dim))
[rank1]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/device_mesh.py", line 781, in get_group
[rank1]: _find_pg_by_ranks_and_tag(*self._dim_group_infos[mesh_dim][:2]) # type: ignore[index]
[rank1]: IndexError: list index out of range
device: cuda:0 | step: 0 | loss = 4.84375
/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py:824: UserWarning: Error detected in NllLoss2DBackward0. Traceback of forward call that caused the error:
File "/transformers/script.py", line 149, in <module>
check(distributed=distributed, use_cp=use_cp)
File "/transformers/script.py", line 108, in check
outputs = model(input_ids, labels=labels)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1637, in forward
else self._run_ddp_forward(*inputs, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1464, in _run_ddp_forward
return self.module(*inputs, **kwargs) # type: ignore[index]
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/transformers/script.py", line 68, in forward
loss = torch.nn.functional.cross_entropy(logits.transpose(1, 2), labels)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 3494, in cross_entropy
return torch._C._nn.cross_entropy_loss(
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:122.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: Traceback (most recent call last):
[rank0]: File "/transformers/script.py", line 149, in <module>
[rank0]: check(distributed=distributed, use_cp=use_cp)
[rank0]: File "/transformers/script.py", line 115, in check
[rank0]: loss.backward()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_tensor.py", line 648, in backward
[rank0]: torch.autograd.backward(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 353, in backward
[rank0]: _engine_run_backward(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 1, 64]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
[rank0]:[W317 08:57:31.906052155 ProcessGroupNCCL.cpp:1497] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
W0317 08:57:31.821000 1659 torch/distributed/elastic/multiprocessing/api.py:900] Sending process 1708 closing signal SIGTERM
E0317 08:57:31.985000 1659 torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 1) local_rank: 1 (pid: 1709) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/local/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 892, in main
run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 883, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
script.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-03-17_08:57:31
host : dff7b35823a9
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 1709)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
### Versions
```bash
PyTorch version: 2.8.0.dev20250315+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.20.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.234-225.895.amzn2.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
GPU 2: NVIDIA A10G
GPU 3: NVIDIA A10G
Nvidia driver version: 550.144.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R32
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
BogoMIPS: 5599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_ts
c rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy
abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 96 MiB (6 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel-extension-for-pytorch==2.3.0
[pip3] mypy-extensions==1.0.0
[pip3] natten==0.15.1+torch220cu121
[pip3] numpy==1.24.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxconverter-common==1.13.0
[pip3] onnxruntime==1.21.0
[pip3] onnxruntime-tools==1.7.0
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] tf2onnx==1.16.1
[pip3] torch==2.8.0.dev20250315+cu126
[pip3] torchaudio==2.6.0.dev20250315+cu126
[pip3] torchvision==0.22.0.dev20250315+cu126
[pip3] triton==3.2.0
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,924,262,398
|
[Submodule] [cpuinfo] cpuinfo update
|
ozanMSFT
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel",
"ciflow/binaries_libtorch"
] | 3
|
COLLABORATOR
|
Updating `cpuinfo` module.
Relevant:
https://github.com/pytorch/cpuinfo/issues/270
| true
|
2,924,253,699
|
Machine with isolated cores is not respected by torch
|
oeliyahoo
|
open
|
[
"module: multiprocessing",
"module: cpu",
"triaged"
] | 4
|
NONE
|
### 🐛 Describe the bug
First, define your last cores as isolated on your system. For example, on my machine:
lscpu output:
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
cat /sys/devices/system/cpu/isolated output:
52-55,108-111,164-167,220-223
I defined it with:
sudo nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="iommu=1 intel_iommu=on iommu=pt kvm.ignore_msrs=1 isolcpus=52-55,108-111,164-167,220-223 nohz_full=52-55,108-111,164-167,220-223 rcu_nocbs=52-55,108-111,164-167,220-223"
sudo update-grub
sudo reboot
Now try to run the script below once with `import torch` and once without:
you will see that if you import torch, your isolated cores are not respected.
```python
import multiprocessing as mp
import time
import psutil
import torch # run it with and without this line see which cores it allocate for the threads.
def worker(worker_id):
pid = mp.current_process().pid
print(f"Worker {worker_id} starting (PID: {pid}).")
start_time = time.time()
# Print the current CPU core every 10 seconds for 30 seconds
while time.time() - start_time < 30:
# psutil.Process().cpu_num() returns the CPU core this process is currently running on.
current_core = psutil.Process().cpu_num()
print(f"Worker {worker_id} (PID: {pid}) is currently running on CPU core {current_core}.")
time.sleep(10)
print(f"Worker {worker_id} finished.")
if __name__ == '__main__':
# Force the spawn start method (this triggers the internal spawn_main call)
mp.set_start_method("spawn", force=True)
processes = []
num_processes = 8
for i in range(num_processes):
p = mp.Process(target=worker, args=(i,))
p.start()
processes.append(p)
for p in processes:
p.join()
print("All processes have finished.")
```
### Versions
on any version.
cc @VitalyFedyunin @albanD @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,924,119,545
|
Allow Privateuse1 device for validating the arguments to sparse compressed tensor factory functions
|
ClowDragon
|
closed
|
[
"triaged",
"module: PrivateUse1"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/commit/bb8c4ecc6d7d546d553c0883422c53705355236d
Similar to this commit, can we allow the privateuse1 device to pass this TORCH_CHECK?
### Alternatives
_No response_
### Additional context
_No response_
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens @albanD
| true
|
2,924,090,242
|
Reduce memory consumption in broadcast bmm
|
zheyishine
|
closed
|
[
"module: cuda",
"module: memory usage",
"triaged",
"module: linear algebra",
"needs design",
"matrix multiplication"
] | 4
|
NONE
|
### 🚀 The feature, motivation and pitch
Here is a minimal example that consumes about 66 GiB of CUDA memory (I guess it may expand `b` to [8192, 32, 1024, 128] before the calculation). Is it possible to reduce the memory consumption without expanding?
`a=torch.rand((8192,32,1,1024),dtype=torch.bfloat16,device='cuda:0')`
`b=torch.rand((1,32,1024,128),dtype=torch.bfloat16,device='cuda:0')`
`c=torch.matmul(a,b)`
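A workaround sketch that might avoid the expansion (added for illustration, untested for performance: do the multiply per head so `b` is never expanded along the batch dimension):
```python
import torch

a = torch.rand((8192, 32, 1, 1024), dtype=torch.bfloat16, device="cuda:0")
b = torch.rand((1, 32, 1024, 128), dtype=torch.bfloat16, device="cuda:0")

a2 = a.squeeze(2).transpose(0, 1)   # [32, 8192, 1024]
b2 = b.squeeze(0)                   # [32, 1024, 128]
c = torch.bmm(a2, b2)               # [32, 8192, 128], batch of 32, no broadcast expansion
c = c.transpose(0, 1).unsqueeze(2)  # [8192, 32, 1, 128], same values as torch.matmul(a, b)
```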
Versions:
torch: 2.6.0+cu126
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,924,073,498
|
Unexpected results w/ LayerNorm -- suspecting possible memory issue?
|
Apex95
|
open
|
[
"module: numerical-stability",
"module: cuda",
"triaged",
"module: 64-bit",
"module: norms and normalization"
] | 4
|
NONE
|
### 🐛 Describe the bug
I'm noticing an interesting behaviour of LayerNorm when applied to large 4d tensors (bf16) when normalized shape is an int (i.e., normalizing over the final dimension of the tensor).
What I'm seeing is that the size of the first dimension (batch size) can impact the normalized values of existing samples. To be specific, if A and B are 2 input tensors (4d), where B = A[:-1], then after passing both through the LayerNorm layer, there's a difference between A[:-1] and B even though B is a subset of A. It's almost as if LayerNorm has some memory-access issue?
This does not happen if I run smaller tensors through this operation or if I run this through a manual normalization (using torch.mean() and unbiased torch.var()).
A code that reproduces this on A100/40GB would be something like:
```python
import torch
import torch.nn.functional as F
torch.manual_seed(1337)
device = torch.device('cuda')
USE_MANUAL_LAYERNORM = False
ln1 = torch.nn.LayerNorm(768).to(device).to(torch.bfloat16)
ln1.eval()
@torch.inference_mode()
def test():
x = torch.rand((24, 493, 768), device=device, dtype=torch.bfloat16)
x1 = x[:-1]
print('-----------------------')
print(f'x shape: {x.shape} | x1 shape: {x1.shape}')
print('> Max Diff at input:\n', (x[:-1]-x1[:]).abs().max())
x = torch.tanh(x)
x1 = torch.tanh(x1)
x = x[:, :, None, :] + x[:, None, :, :]
x1 = x1[:, :, None, :] + x1[:, None, :, :]
print('> Max Diff after broadcast:\n', (x[:-1]-x1[:]).abs().max())
x = F.gelu(x)
x1 = F.gelu(x1)
print('> Max Diff after non-linearity:\n', (x[:-1]-x1[:]).abs().max())
_x = ln1(x[:, :, 0, :])
_x1 = ln1(x1[:, :, 0, :])
print('> Max Diff after 3d layernorm:\n', (_x[:-1]-_x1[:]).abs().max())
if USE_MANUAL_LAYERNORM:
x = (x - x.mean(dim=-1, keepdim=True)) / torch.sqrt(torch.var(x, dim=-1, keepdim=True, correction=1)+1e-8)
x1 = (x1 - x1.mean(dim=-1, keepdim=True)) / torch.sqrt(torch.var(x1, dim=-1, keepdim=True, correction=1)+1e-8)
print(x[0,:2, :2, :2])
print(x1[0,:2, :2, :2])
print('> Max Diff after manual 4d layernorm:\n', (x[:-1]-x1[:]).abs().max())
else:
x = ln1(x)
x1 = ln1(x1)
print(x[0,:2, :2, :2])
print(x1[0,:2, :2, :2])
print('> Max Diff after 4d layernorm:\n', (x[:-1]-x1[:]).abs().max())
test()
```
Which yields, for a large tensor:
```
x shape: torch.Size([24, 493, 768]) | x1 shape: torch.Size([23, 493, 768])
> Max Diff at input:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after broadcast:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after non-linearity:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after 3d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
[...]
> Max Diff after 4d layernorm:
tensor(0.6172, device='cuda:0', dtype=torch.bfloat16)
```
Doing this with `USE_MANUAL_LAYERNORM = True` gives:
```
x shape: torch.Size([24, 493, 768]) | x1 shape: torch.Size([23, 493, 768])
> Max Diff at input:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after broadcast:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after non-linearity:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after 3d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
[...]
> Max Diff after manual 4d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
```
Also, for a smaller tensor (i.e., x.shape = (24, 200, 768)):
```
x shape: torch.Size([24, 200, 768]) | x1 shape: torch.Size([23, 200, 768])
> Max Diff at input:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after broadcast:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after non-linearity:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
> Max Diff after 3d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
[...]
> Max Diff after 4d layernorm:
tensor(0., device='cuda:0', dtype=torch.bfloat16)
```
Please let me know if there's any mistake in my understanding of this.
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.11.11 | packaged by conda-forge | (main, Mar 3 2025, 20:43:55) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.240
BogoMIPS: 4400.48
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 6 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @ptrblck @msaroufim @eqy
| true
|
2,924,049,332
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,923,991,531
|
update aotinductor doc for XPU support
|
jingxu10
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
As the title says. Since the AOTInductor feature works on Intel GPU starting from 2.7, add the related content to its doc.
| true
|
2,923,955,014
|
[invoke_subgraph] Support unbacked
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149298
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D71420641](https://our.internmc.facebook.com/intern/diff/D71420641)
| true
|
2,923,954,905
|
[invoke_subgraph] Support pending unbacked symint
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149298
* __->__ #149297
* #149296
The "PendingUnbackedSymbolNotFound" error is when an unbacked symbol is created within a piece of code, but this symbol never appears in any of the outputs. I believe the original intention is to help catch incorrectly written meta kernels, where users might've unintentionally created an unbacked symbol but never used it anywhere, but in our case this is intentional. An example is the following test case:
```python
def test_pending_unbacked(self):
class M(torch.nn.Module):
@mark_compile_region
def gn(self, x):
u = x[0].item()
return x * u
def forward(self, x):
for _ in range(4):
x = self.gn(x)
return x
torch._dynamo.config.capture_scalar_outputs = True
torch.compile(M())(torch.randn(8))
```
This fails with the error:
```
torch._dynamo.exc.InternalTorchDynamoError: PendingUnbackedSymbolNotFound: Pending unbacked symbols {zuf1} not in returned outputs (FakeTensor(..., size=(8,)),) .
```
In this case, creating the unbacked symbol is intentional, so we can bypass this using `fake_mode.shape_env.ignore_fresh_unbacked_symbols()`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D71298926](https://our.internmc.facebook.com/intern/diff/D71298926)
| true
|
2,923,954,786
|
[export] Add mark_compiled_region support
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149298
* #149297
* __->__ #149296
Differential Revision: [D71298930](https://our.internmc.facebook.com/intern/diff/D71298930)
| true
|
2,923,954,690
|
[export] Patch dynamo configs when nonstrict tracing
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149295
Differential Revision: [D71298929](https://our.internmc.facebook.com/intern/diff/D71298929)
| true
|
2,923,954,578
|
[export] Add TracingContext
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149295
* __->__ #149294
TracingContext is added to all tracing locations -- in torch.export this is where we call make_fx (for training IR) and aot_export_module (for inference IR), and in run_decompositions where we call aot_export_module
Differential Revision: [D71298927](https://our.internmc.facebook.com/intern/diff/D71298927)
| true
|
2,923,921,839
|
When resuming training from a breakpoint using FSDP, an Out of Memory (OOM) error will occur during the model loading process.
|
nomadlx
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 4
|
NONE
|
### 🐛 Describe the bug
The original training was completely normal and there was no Out of Memory (OOM) issue. However, if the training is interrupted and I try to resume it from the last checkpoint, an unexpected OOM occurs while loading the FSDP model.
Compared with the code for training from scratch, the code for resuming from a checkpoint loads the FSDP model and parameters after the model is initialized and constructed. I have carefully checked the code and confirmed that offload_to_cpu=True is configured for the FSDP loading; as expected, it should not additionally increase GPU memory usage. Yet before and after the FSDP loading, I observe an increase in GPU memory usage.
The following is the loading code and the difference in GPU memory usage before and after loading.
```python
log_gpu_memory_usage('fsdp checkpoint load 1.2', logger=None)
state_dict_cfg = ShardedStateDictConfig(offload_to_cpu=True)
optim_cfg = ShardedOptimStateDictConfig(offload_to_cpu=True)
with FSDP.state_dict_type(self.model, StateDictType.SHARDED_STATE_DICT, state_dict_cfg, optim_cfg):
self.model.load_state_dict(model_state_dict)
if self.optimizer is not None:
self.optimizer.load_state_dict(optimizer_state_dict)
log_gpu_memory_usage('fsdp checkpoint load 1.3', logger=None)
```
log:
```
fsdp checkpoint load 1.2, memory allocated (GB): 52.07300519943237, memory reserved (GB): 61.4921875
fsdp checkpoint load 1.3, memory allocated (GB): 65.82867097854614, memory reserved (GB): 67.294921875
```
Is this a bug? For example, are the parameter tensors created during initialization not being freed? Or is there something I need to pay attention to in order to avoid this situation?
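One diagnostic worth trying right after the load (a sketch reusing your `log_gpu_memory_usage` helper, not a confirmed fix): drop the CPU-side state dicts and empty the allocator cache, to see whether the extra usage is live storage or just references/cached blocks kept alive by the loading path:
```python
import gc
import torch

# after self.model.load_state_dict / optimizer.load_state_dict have returned
del model_state_dict, optimizer_state_dict
gc.collect()
torch.cuda.empty_cache()
log_gpu_memory_usage('fsdp checkpoint load 1.4', logger=None)
```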
### Versions
pytorch==2.4.0
cuda==12.1
I enabled the offload parameter and the offload optimizer.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,923,808,062
|
[CPU] Fix ARM float32 fmsub
|
zhangfeiv0
|
closed
|
[
"triage review",
"triaged",
"module: vectorization",
"module: correctness (silent)",
"module: arm"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
In `vec_base.h`, the [fmsub](https://github.com/pytorch/pytorch/blob/916e8979d3e0d651a9091732ce3e59da32e72b0e/aten/src/ATen/cpu/vec/vec_base.h#L987) function implements the functionality of `a * b - c`. However, in `vec128_float_neon.h`, it uses [vfmsq_f32](https://github.com/pytorch/pytorch/blob/916e8979d3e0d651a9091732ce3e59da32e72b0e/aten/src/ATen/cpu/vec/vec128/vec128_float_neon.h#L543). According to the [manual](https://developer.arm.com/architectures/instruction-sets/intrinsics/#f:@navigationhierarchiesreturnbasetype=[float]&f:@navigationhierarchiessimdisa=[Neon]&q=vfmsq_f32), for the input order `c, a, b`, it implements `c - a * b`, which results in the opposite outcome.
Below is the testing environment:
```plaintext
# lscpu
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: HiSilicon
BIOS Vendor ID: QEMU
Model name: Kunpeng-920
BIOS Model name: virt-rhel8.2.0 CPU @ 2.0GHz
BIOS CPU family: 1
Model: 0
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
Stepping: 0x1
BogoMIPS: 200.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jsc
vt fcma dcpop asimddp asimdfhm
```
The running result of `./build/bin/vec_test_all_types_DEFAULT`:
```plaintext
[ RUN ] BitwiseFloatsAdditional/0.Fmsub
/root/pytorch/aten/src/ATen/test/vec_test_all_types.h:880: Failure
Expected equality of these values:
nearlyEqual<UVT>(expArr[i], actArr[i], absErr)
Which is: false
true
24127.314453125!=-24127.314453125
Failure Details:
fmsub "/root/pytorch/aten/src/ATen/test/vec_test_all_types.cpp":940
Test Seed to reproduce: 1742184039034921432
Arguments:
# vec[-108.478317, 430.048676, 439.19342, 111.896461]
# vec[443.884338, 151.219467, -189.899826, -492.905579]
Expected:
# vec[24127.3145, 257714.359, -55222.8984, 15263.2383]
Actual:
# vec[-24127.3145, -257714.359, 55222.8945, -15263.2383]
First mismatch Index: 0
```
Modify the NEON implementation to:
```c++
template <>
Vectorized<float> inline fmsub(const Vectorized<float>& a, const Vectorized<float>& b, const Vectorized<float>& c) {
return Vectorized<float>(vnegq_f32(vfmsq_f32(c, a, b)));
}
```
The running result of `./build/bin/vec_test_all_types_DEFAULT` after the modification:
```plaintext
[----------] 6 tests from BitwiseFloatsAdditional/0, where TypeParam = at::vec::DEFAULT::Vectorized<float>
[ RUN ] BitwiseFloatsAdditional/0.ZeroMask
[ OK ] BitwiseFloatsAdditional/0.ZeroMask (0 ms)
[ RUN ] BitwiseFloatsAdditional/0.Convert
[ OK ] BitwiseFloatsAdditional/0.Convert (0 ms)
[ RUN ] BitwiseFloatsAdditional/0.Fmadd
[ OK ] BitwiseFloatsAdditional/0.Fmadd (78 ms)
[ RUN ] BitwiseFloatsAdditional/0.Fmsub
[ OK ] BitwiseFloatsAdditional/0.Fmsub (78 ms)
[ RUN ] BitwiseFloatsAdditional/0.FmaddVecN
[ OK ] BitwiseFloatsAdditional/0.FmaddVecN (79 ms)
[ RUN ] BitwiseFloatsAdditional/0.Blendv
[ OK ] BitwiseFloatsAdditional/0.Blendv (0 ms)
[----------] 6 tests from BitwiseFloatsAdditional/0 (236 ms total)
```
### Versions
main
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01
| true
|
2,923,793,179
|
[xpu] Adds XPU support in OffsetBasedRNGTracker
|
pkourdis
|
closed
|
[
"oncall: distributed",
"open source",
"topic: not user facing"
] | 5
|
NONE
|
Met this error during training with torchtitan:
```
x4008c6s6b0n0.hostmgmt2008.cm.aurora.alcf.anl.gov 6: [rank6]: File "/lus/flare/projects/Aurora_deployment/pkourdis/conda/envs/pytorch/lib/python3.12/site-packages/torch/distributed/tensor/_random.py", line 82, in manual_seed
x4008c6s6b0n0.hostmgmt2008.cm.aurora.alcf.anl.gov 6: [rank6]: _rng_tracker = OffsetBasedRNGTracker(device_mesh, run_state_sync=False)
x4008c6s6b0n0.hostmgmt2008.cm.aurora.alcf.anl.gov 6: [rank6]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
x4008c6s6b0n0.hostmgmt2008.cm.aurora.alcf.anl.gov 6: [rank6]: File "/lus/flare/projects/Aurora_deployment/pkourdis/conda/envs/pytorch/lib/python3.12/site-packages/torch/distributed/tensor/_random.py", line 174, in __init__
x4008c6s6b0n0.hostmgmt2008.cm.aurora.alcf.anl.gov 6: [rank6]: raise RuntimeError(
x4008c6s6b0n0.hostmgmt2008.cm.aurora.alcf.anl.gov 6: [rank6]: RuntimeError: OffsetBasedRNGTracker instantiation requires the presence of CUDA/CUDA-like device. Got xpu instead
```
I was able to successfully run training with `XPU/XCCL` backends with this fix.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,923,656,353
|
as_subclass doesn't work under TorchDispatchMode
|
pritamdamania87
|
closed
|
[
"triaged",
"module: __torch_dispatch__"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We have a torch.Tensor subclass which shares autograd history with a passed in `data` torch.Tensor using the `as_subclass` method. This works well except in the case where we use `TorchDispatchMode`:
```
import torch
from torch.utils._python_dispatch import TorchDispatchMode
class Foo(TorchDispatchMode):
def __torch_dispatch__(self, func, types, args, kwargs=None):
return func(*args, **(kwargs or {}))
class MyTensor(torch.Tensor):
def __new__(cls, data: torch.Tensor):
return data.as_subclass(cls)
t1 = torch.rand(10, requires_grad=True)
t2 = t1 + t1
m1 = MyTensor(t2)
with Foo():
m2 = MyTensor(t2)
```
This fails, with the following error:
```
Traceback (most recent call last):
File "test.py", line 18, in <module>
m2 = MyTensor(t2)
^^^^^^^^^^^^
File "test.py", line 11, in __new__
return data.as_subclass(cls)
^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Creating a new Tensor subclass MyTensor but the raw Tensor object is already associated to a python object of type Tensor
```
We can't use `make_subclass` or `make_wrapper_subclass` since those lose the autograd history of the passed-in Tensor. Is there any way to achieve what we're looking for?
### Versions
2.4
cc @Chillee @ezyang @zou3519 @albanD @samdow
| true
|
2,923,635,342
|
Fix the invalid link for FX
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149289
As the title states.
| true
|
2,923,626,898
|
add support for numpy
|
tugsbayasgalan
|
open
|
[
"fb-exported",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149288
* #148488
* #148485
Differential Revision: [D71294355](https://our.internmc.facebook.com/intern/diff/D71294355/)
| true
|
2,923,534,776
|
[AOTI][refactor] Remove dead code
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149287
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,923,465,903
|
Replace c10::is_pod with std::is_trivial
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
These remaining c10::is_pod calls can be replaced without compromising the semantics.
| true
|
2,923,462,583
|
added documentation for masked_fill and masked_fill_
|
julurisaichandu
|
open
|
[
"triaged",
"open source",
"release notes: python_frontend",
"topic: docs"
] | 6
|
NONE
|
Fixes #149284
Added examples in the documentation of masked_fill and masked_fill_
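For reviewers, a minimal sketch of the kind of example being added (the exact snippet in the rendered docs may differ):
```python
import torch

t = torch.zeros(2, 3)
mask = torch.tensor([[True, False, True],
                     [False, True, False]])

out = t.masked_fill(mask, 0.5)   # out-of-place: t is unchanged
t.masked_fill_(mask, 0.5)        # in-place: t itself is modified
print(out)
print(t)
# both print:
# tensor([[0.5000, 0.0000, 0.5000],
#         [0.0000, 0.5000, 0.0000]])
```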
| true
|
2,923,461,946
|
No examples in documentation for masked_fill and masked_fill_
|
julurisaichandu
|
open
|
[
"module: docs",
"triaged",
"topic: docs"
] | 0
|
NONE
|
### 📚 The doc issue
No examples in documentation for masked_fill and masked_fill_
masked_fill - https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill.html
masked_fill_- https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill_.html
### Suggest a potential alternative/fix
add example functions for both of them
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
2,923,372,775
|
Avoid unnecessary clone in torch.cuda.set_rng_state
|
ppwwyyxx
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Clone has a performance issue, according to https://github.com/NVIDIA/Megatron-LM/blob/f49c3eb6e6d55ab6ffb085cb48d95c2d40276c3f/megatron/core/tensor_parallel/random.py#L77-L80
| true
|
2,923,312,684
|
[cuDNN][SDPA] cuDNN SDPA refactor/cleanup, nested tensor backward, test priority bump for `sm90`, `sm100`
|
eqy
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: sdpa"
] | 8
|
COLLABORATOR
|
cleanup tuple/tensor boilerplate in cuDNN SDPA, preparation for nested/ragged tensor backward
| true
|
2,923,289,434
|
"Significant" Numerical differences for different tensor shapes in CUDA
|
HaoxiangYou
|
closed
|
[
"module: numerical-stability",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
I get very different results when I do the calculation on CUDA depending on the tensor shape the operation is applied to; below is sample code to reproduce:
```
import torch
import torch.nn as nn
sequence_size = 32
env_size = 64
input_dim = 39
hidden_dim = 64
output_dim = 6
device = "cuda"
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
batch_input = torch.randn((sequence_size, env_size, input_dim), dtype=torch.float32, device=device)
model = nn.Linear(in_features=input_dim, out_features=output_dim, device=device)
batch_output = model(batch_input)
print("big batch together:", batch_output[0,0])
print("smaller batch:", model(batch_input[0])[0])
```
The output is

where the largest difference is around 3e-4, which is bigger than 1e-6 (the float precision).
In my application I have seen these differences go up to 5e-3, which seems much bigger than in previous issues and affects the overall performance of my algorithm.
Everything works fine when I do the computation on CPU; no big difference (1e-4~1e-3) is noticed.
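One factor worth ruling out on an Ampere GPU like the RTX 3070 Ti (a guess, not a confirmed diagnosis): PyTorch 1.11 enables TF32 for float32 matmuls by default, and different batch sizes can dispatch to different cuBLAS kernels with different reduction orders, both of which can produce differences well above 1e-6. A quick check:
```python
import torch

# Disable TF32 for matmuls and re-run the comparison; if the gap shrinks to
# ~1e-6, the difference comes from reduced-precision TF32 accumulation rather
# than from nn.Linear itself.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
```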
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: glibc-2.35
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 3
CPU max MHz: 5000.0000
CPU min MHz: 400.0000
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.24.3 py38h14f4228_0
[conda] numpy-base 1.24.3 py38h31eccc5_0
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.12.0 py38_cu113 pytorch
| true
|
2,923,262,104
|
[ROCm] support torch._C._set_sm_carveout_experimental - Parity with Nvidia
|
OrenLeung
|
open
|
[
"module: rocm",
"triaged"
] | 10
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Hi @hliuca
On Nvidia, `torch._C._set_sm_carveout_experimental` is supported for better compute-comms overlapping. This is useful during the backward pass of DDP and the forward/backward pass of FSDP to ensure there are enough available `SM/CUs` for the rccl comms kernels, so they are not blocked by compute kernels that use up all the `SM/CUs`.
Furthermore, it is useful for benchmarking real-world GEMMs that occur in the backward pass, where the GEMM is unable to take up all the available `SM/CUs` because rccl comms kernels occupy some of them.
related to #147966
I was looking into implementing this myself, but it seems it isn't as simple as calling `hipblasLtMatmulDescSetAttribute`; it requires changes to `hipblaslt` itself, since unlike with `cublasLtMatmulDescSetAttribute`, a `HIPBLASLT_MATMUL_DESC_CU_COUNT_TARGET` option does not exist for the `hipblasLtMatmulDescSetAttribute` function, which takes an enum of `hipblasLtMatmulDescAttributes_t`, at least according to the AMD docs:
https://rocm.docs.amd.com/projects/hipBLASLt/en/latest/datatypes.html#_CPPv431hipblasLtMatmulDescAttributes_t
```cpp
computeDesc.setAttribute<int32_t>(
CUBLASLT_MATMUL_DESC_SM_COUNT_TARGET,
at::cuda::getCurrentDeviceProperties()->multiProcessorCount -
at::globalContext()._SMCarveout_EXPERIMENTAL().value());
}
```
### Versions
any rocm torch version
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,923,244,020
|
CUDA Assertion Error in Scatter Operation During Training (RTX5090 cu128)
|
bstartek
|
open
|
[
"needs reproduction",
"module: windows",
"module: cuda",
"triaged",
"module: scatter & gather ops"
] | 6
|
NONE
|
### 🐛 Describe the bug
Description:
I encountered a CUDA error: device-side assert triggered while training nnUNetv2 using PyTorch Nightly (cu128) on an RTX 5090. The error occurs in ScatterGatherKernel.cu:367, suggesting that an index is out of bounds in a scatter operation. This leads to a crash in the loss calculation.
System Information:
nnUNetv2 Version: Latest (as of submission)
PyTorch Version: Nightly (cu128)
CUDA Version: (12.8)
GPU: RTX 5090
OS: Windows 11
Python Version: 3.11
Environment: Virtualenv (PyCharm)
Error Message (Relevant Excerpt)
❌ Error during training: C:\actions-runner_work\pytorch\pytorch\pytorch\aten\src\ATen\native\cuda\ScatterGatherKernel.cu:367:
block: [4935,0,0], thread: [121,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed.
RuntimeError: CUDA error: device-side assert triggered
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
The error originates from the following line in dice.py:
y_onehot.scatter_(1, y.long(), 1)
Steps to Reproduce:
Train a model with nnUNetv2_train using PyTorch Nightly (cu128).
Use a dataset with multiple classes (segmentation task).
Encounter the crash during loss computation.
What I Have Tried:
Verified that target labels are within the expected range.
Checked for potential dataset preprocessing issues.
Ensured that the number of output channels in the model matches the expected number of classes.
The issue persists across multiple training runs.
Expected Behavior:
The training should run without assertion failures, i.e. the scatter operation should not encounter out-of-bounds indices.
The same setup works on an RTX 3080 with cu126.
Would appreciate any insights or potential fixes!
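A small sanity check before the scatter call may help pin this down (a sketch, assuming `y` holds integer class labels and `y_onehot` has one channel per class); running with `CUDA_LAUNCH_BLOCKING=1` also makes the failing call easier to locate:
```python
num_classes = y_onehot.shape[1]
y_long = y.long()
# the scatter index must lie in [0, num_classes - 1] along dim 1
assert int(y_long.min()) >= 0 and int(y_long.max()) < num_classes, (
    f"labels outside [0, {num_classes - 1}]: "
    f"min={int(y_long.min())}, max={int(y_long.max())}"
)
y_onehot.scatter_(1, y_long, 1)
```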
### Versions
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @ptrblck @eqy
| true
|
2,923,238,775
|
`torch.where` (`torch.nonzero(..., as_tuple=True)`) silently produces incorrect outputs with `torch.compile`
|
tohtana
|
closed
|
[
"triaged",
"oncall: pt2"
] | 2
|
NONE
|
### 🐛 Describe the bug
Hi PyTorch and `torch.compile` developers,
Thank you for your excellent work on `torch.compile`. I've been greatly enjoying its features.
I recently noticed that `torch.where` (`torch.nonzero(..., as_tuple=True)`) silently produces incorrect outputs when it is compiled with Inductor and `capture_dynamic_output_shape_ops=True`.
To reproduce:
```python
import torch
def f(x):
return torch.where(x)
x = torch.randn(2, 2048, device='cuda')
torch._dynamo.config.capture_dynamic_output_shape_ops = True
torch._dynamo.config.capture_scalar_outputs = True
compiled_f = torch.compile(f)
print(f"f(x)={f(x)}")
print(f"compiled_f(x)={compiled_f(x)}")
```
The output is
```
f(x)=(tensor([0, 0, 0, ..., 1, 1, 1], device='cuda:0'), tensor([ 0, 1, 2, ..., 2045, 2046, 2047], device='cuda:0'))
compiled_f(x)=(tensor([0, 0, 0, ..., 1, 1, 1], device='cuda:0'), tensor([0, 0, 0, ..., 1, 1, 0], device='cuda:0'))
```
When `torch._dynamo.config.capture_dynamic_output_shape_ops==False`, the results match.
```
f(x)=(tensor([0, 0, 0, ..., 1, 1, 1], device='cuda:0'), tensor([ 0, 1, 2, ..., 2045, 2046, 2047], device='cuda:0'))
compiled_f(x)=(tensor([0, 0, 0, ..., 1, 1, 1], device='cuda:0'), tensor([ 0, 1, 2, ..., 2045, 2046, 2047], device='cuda:0'))
```
This usage appears in the [Mixtral](https://github.com/huggingface/transformers/blob/fc8764c9a618add64c33e83720f974750bcd0978/src/transformers/models/mixtral/modeling_mixtral.py#L139) on HF model hub to implement MoE routing.
### Error logs
It doesn't throw an error. See above for outputs.
### Versions
```
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1082-azure-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 NVL
GPU 1: NVIDIA H100 NVL
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9V84 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 80
Socket(s): 1
Stepping: 1
BogoMIPS: 4800.06
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr rdpru arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 2.5 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 80 MiB (80 instances)
L3 cache: 320 MiB (10 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39
NUMA node1 CPU(s): 40-79
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affectedVersions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cudnn-frontend==1.5.2
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] optree==0.12.1
[pip3] pynvjitlink==0.2.3
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0+cu126
[pip3] torch-tb-profiler==0.4.3
[pip3] torch_tensorrt==2.5.0a0
[pip3] torchaudio==2.6.0+cu126
[pip3] torchmetrics==1.4.2
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @chauhang @penguinwu
| true
|
2,923,199,047
|
Fix spelling
|
TheodoreEhrenborg
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: sparse"
] | 7
|
CONTRIBUTOR
| null | true
|
2,923,177,142
|
ONNX: export failure ❌ 1.1s: Exporting the operator 'aten::fft_fft2' to ONNX opset version 17 is not supported.
|
EvelynCarter
|
closed
|
[
"module: onnx",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
(Da) D:\Create\YOLOv8-TensorRT-main>E:/anaconda3/envs/Da/python.exe d:/Create/YOLOv8-TensorRT-main/export.py
Ultralytics YOLOv8.2.50 🚀 Python-3.10.16 torch-2.2.2+cu121 CPU (Intel Core(TM) i7-9750H 2.60GHz)
YOLOv8-SOEP summary (fused): 194 layers, 3307670 parameters, 0 gradients, 11.8 GFLOPs
PyTorch: starting from 'v8doep.pt' with input shape (1, 3, 320, 320) BCHW and output shape(s) (1, 6, 2100) (6.6 MB)
ONNX: starting export with onnx 1.17.0 opset 17...
ONNX: export failure ❌ 1.1s: Exporting the operator 'aten::fft_fft2' to ONNX opset version 17 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: h
How can I solve this problem?
### Versions
Exporting the operator 'aten::fft_fft2' to ONNX opset version 18 is not supported.
Trying to convert torch model to onnx model.
How can I solve this problem?
| true
|
2,923,129,981
|
torch.onnx.export Dynamic shapes output not working
|
ducknificient
|
closed
|
[] | 3
|
NONE
|
### 🐛 Describe the bug
Export does not work when using the `nvidia/nv-embed-v2` model; if I remove `sentence_embeddings` from the inputs, the input becomes static.
```python
!pip install datasets==3.3.2
!pip install onnx==1.17.0 onnxscript==0.2.2 torch==2.6.0 torchvision==0.21.0 onnxruntime-gpu==1.21.0
!git clone https://github.com/ducknificient/transformers.git
!pip install /content/transformers
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
model = AutoModel.from_pretrained(
'nvidia/nv-embed-v2',
trust_remote_code=True,)
model.eval()
batch_size = 4
dummy_input_ids = torch.randint(0, 32000, (batch_size, 128)) # Batch size 2, sequence length 128
dummy_attention_mask = torch.ones((batch_size, 128), dtype=torch.int64)
dummy_pool_mask = torch.ones((batch_size, 128), dtype=torch.int64)
from torch.export import Dim
dynamic_shapes = {
"input_ids": (Dim.DYNAMIC, Dim.DYNAMIC),
"attention_mask": (Dim.DYNAMIC, Dim.DYNAMIC),
"pool_mask": (Dim.DYNAMIC, Dim.DYNAMIC),
"sentence_embeddings": (Dim.DYNAMIC, Dim.DYNAMIC),
}
# with torch.inference_mode():
torch.onnx.export(
model, # PyTorch model
# (features,), # Model inputs
(dummy_input_ids, dummy_attention_mask, dummy_pool_mask),
output_path, # Output file
export_params=True, # Store the trained weights
opset_version=14, # ONNX opset version
input_names=['input_ids', 'attention_mask','pool_mask'], # Input names
output_names=['sentence_embeddings'], # Output names
dynamic_shapes=dynamic_shapes, # Dynamic axes
dynamo=True,
verbose=True # Detailed output
)
print(f"Model exported to {output_path}")
```
this is the error
```
/usr/local/lib/python3.11/dist-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/usr/local/lib/python3.11/dist-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with `torch.export.export(..., strict=False)`... ❌
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with `torch.export.export`...
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with `torch.export.export`... ❌
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with Torch Script...
/usr/lib/python3.11/contextlib.py:105: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
W0316 15:11:47.871000 14360 torch/fx/experimental/symbolic_shapes.py:6307] failed during evaluate_expr(Eq((u0//128), 1), hint=None, size_oblivious=True, forcing_spec=False
E0316 15:11:47.872000 14360 torch/fx/experimental/recording.py:299] failed while running evaluate_expr(*(Eq((u0//128), 1), None), **{'fx_node': False, 'size_oblivious': True})
W0316 15:11:47.918000 14360 torch/fx/experimental/symbolic_shapes.py:6830] Unable to find user code corresponding to {u0}
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with Torch Script... ❌
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with internal Dynamo apis...
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with internal Dynamo apis... ❌
---------------------------------------------------------------------------
UserError Traceback (most recent call last)
[/usr/local/lib/python3.11/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py](https://localhost:8080/#) in __call__(self, model, args, kwargs, dynamic_shapes)
109 try:
--> 110 exported_program = self._capture(model, args, kwargs, dynamic_shapes)
111 except Exception as e:
17 frames
UserError: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['input_ids', 'attention_mask', 'pool_mask'] of `inputs`, but here they are ['input_ids', 'attention_mask', 'pool_mask', 'sentence_embeddings']. Alternatively, you could also ignore arg names entirely and specify `dynamic_shapes` as a list/tuple matching `inputs`. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
The above exception was the direct cause of the following exception:
TorchExportError Traceback (most recent call last)
[/usr/local/lib/python3.11/dist-packages/torch/onnx/_internal/exporter/_core.py](https://localhost:8080/#) in export(model, args, kwargs, registry, dynamic_shapes, input_names, output_names, report, verify, profile, dump_exported_program, artifacts_dir, verbose)
1290 # torch.jit.trace is due to the fallback and can be confusing to users.
1291 # We save all errors in the error report.
-> 1292 raise _errors.TorchExportError(
1293 _STEP_ONE_ERROR_MESSAGE
1294 + (
TorchExportError: Failed to export the model with torch.export. This is step 1/3 of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and summit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
## Exception summary
<class 'torch._dynamo.exc.UserError'>: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['input_ids', 'attention_mask', 'pool_mask'] of `inputs`, but here they are ['input_ids', 'attention_mask', 'pool_mask', 'sentence_embeddings']. Alternatively, you could also ignore arg names entirely and specify `dynamic_shapes` as a list/tuple matching `inputs`. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
(Refer to the full stack trace above for more information.)
```
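Based on the error message, the top-level keys of `dynamic_shapes` must match only the model's input argument names (`input_ids`, `attention_mask`, `pool_mask`); the output name `sentence_embeddings` should not appear in it. A sketch of the corrected dict (untested against this particular model):
```python
# assumes `from torch.export import Dim` as in the snippet above
dynamic_shapes = {
    "input_ids": (Dim.DYNAMIC, Dim.DYNAMIC),
    "attention_mask": (Dim.DYNAMIC, Dim.DYNAMIC),
    "pool_mask": (Dim.DYNAMIC, Dim.DYNAMIC),
    # outputs are named via output_names, not described in dynamic_shapes
}
```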
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.48
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.1
[pip3] pynvjitlink-cu12==0.5.2
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
| true
|
2,923,122,269
|
`Floating point exception` in `torch.mkldnn_max_pool2d`
|
default1360
|
closed
|
[
"module: error checking",
"triaged",
"module: mkldnn",
"module: empty tensor",
"topic: fuzzer"
] | 3
|
NONE
|
### 🐛 Describe the bug
Running the following PyTorch code results in a `Floating point exception` crash:
```
import torch
x = torch.randn(2, 64, 32, 32).to_mkldnn()
out2 = torch.mkldnn_max_pool2d(x, kernel_size=3, stride=0)
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @malfet @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,923,088,127
|
PyTorch Appears Locked Behind a Paywall on Pydroid 3 (Possible BSD License Violation)
|
MEME-KING16
|
closed
|
[
"triage review"
] | 5
|
NONE
|
I’ve noticed something concerning regarding PyTorch usage on the Android app Pydroid 3 (available on the Google Play Store). It seems that PyTorch, despite being open-source software under the BSD 3-Clause License, is being offered only as a premium feature requiring payment within the app.
Specifically, when trying to install PyTorch within Pydroid 3, I encounter a paywall, clearly marked as “PREMIUM ONLY.” This prevents free access to PyTorch through their provided package manager, which seems to conflict with PyTorch’s BSD licensing terms. From my understanding, the BSD license permits commercial usage but requires clear attribution, the inclusion of the original license text, and generally prohibits restricting access to the software itself behind paywalls.
I’ve attached a screenshot clearly showing PyTorch labeled as a “PREMIUM ONLY” package within Pydroid 3.
Could the PyTorch team please look into this and clarify whether this usage aligns with the intended licensing? If it does not, I’d appreciate if you could address this issue or provide further guidance.
Thanks for your time

| true
|
2,922,947,159
|
Make Subset dataset a true wrapper
|
adamjstewart
|
open
|
[
"triaged",
"open source",
"release notes: dataloader"
] | 4
|
CONTRIBUTOR
|
Imagine you have a custom `FooDataset` class like so:
```python
from torch.utils.data import Dataset, random_split
class FooDataset(Dataset):
url = '...'
def plot(self, x):
...
```
You want to make train/val/test splits of this dataset like so:
```python
dataset = FooDataset(...)
train_dataset, val_dataset, test_dataset = random_split(dataset, [0.6, 0.2, 0.2])
```
One would _expect_ that `train_dataset` has all of the same attributes and methods as `dataset`. However, it doesn't; you have to use the unintuitive `train_dataset.dataset` to access these attributes.
This PR turns `Subset` into a true wrapper class, such that all undefined attributes/methods automatically redirect to the `dataset` attribute. One can now access `train_dataset.url` or `train_dataset.plot(x)` as they would with the `dataset` object.
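For context, the delegation boils down to something like the following (a simplified sketch of the idea, not the exact diff):
```python
from torch.utils.data import Dataset

class Subset(Dataset):
    def __init__(self, dataset, indices):
        self.dataset = dataset
        self.indices = indices

    def __getitem__(self, idx):
        return self.dataset[self.indices[idx]]

    def __len__(self):
        return len(self.indices)

    def __getattr__(self, name):
        # Called only when normal lookup fails; forward anything not defined on
        # Subset (e.g. `url`, `plot`) to the wrapped dataset. Guard against
        # recursion before `dataset` is set (e.g. during unpickling).
        if name == "dataset":
            raise AttributeError(name)
        return getattr(self.dataset, name)
```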
### References
* Inspired by: https://stackoverflow.com/questions/68926132/creation-of-a-class-wrapper-in-python
| true
|
2,922,940,517
|
Fix unexpected keyword argument 'mode' when calling `CompileCounterWithBackend`
|
GdoongMathew
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo"
] | 20
|
CONTRIBUTOR
|
Fixes #149209
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,922,934,377
|
Add default XPU toolkit path to CMake
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: xpu"
] | 12
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149270
# Motivation
Add default XPU runtime path to CMake to mitigate https://github.com/pytorch/pytorch/issues/149075
This ensures proper linking with `libtorch` when a user does not source the Torch XPU toolkit while working on a C++ library or executable.
| true
|
2,922,708,295
|
Add Optimizer: Sharpness Aware Minimization
|
divyansha
|
open
|
[
"module: optimizer",
"triaged"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
[Sharpness Aware Minimization](https://arxiv.org/abs/2010.01412) is an optimizer that has been shown to improve robustness and model generalization. While open source implementations are available, SAM has not been added to the PyTorch library yet.
It would be great if we added SAM to the PyTorch library of optimizers.
I would love to take this up if there is interest in having SAM added to PyTorch.
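For context, SAM is a two-step update wrapped around a base optimizer. A minimal sketch of the algorithm from the paper (assuming a `closure` that recomputes the loss; this is not a proposal for the final `torch.optim` API):
```python
import torch

def sam_step(params, closure, base_optimizer, rho=0.05):
    """One SAM update: perturb weights toward higher loss, take gradients there,
    then step the base optimizer at the original weights."""
    params = [p for p in params if p.requires_grad]

    # 1) gradients at the current weights
    base_optimizer.zero_grad()
    closure().backward()
    grad_norm = torch.norm(
        torch.stack([p.grad.norm(p=2) for p in params if p.grad is not None]), p=2
    )

    # 2) move to the worst-case nearby point w + e, with e = rho * g / ||g||
    perturbations = []
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                perturbations.append(None)
                continue
            e = p.grad * (rho / (grad_norm + 1e-12))
            p.add_(e)
            perturbations.append(e)

    # 3) gradients at the perturbed weights
    base_optimizer.zero_grad()
    closure().backward()

    # 4) restore the original weights and step with the sharpness-aware gradients
    with torch.no_grad():
        for p, e in zip(params, perturbations):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
```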
### Alternatives
_No response_
### Additional context
[SAM Open Source Implementation](https://github.com/davda54/sam)
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,922,707,327
|
Fix mps scaled dot attention
|
rakshekaraj
|
open
|
[
"module: cpu",
"triaged",
"open source",
"module: amp (automated mixed precision)",
"release notes: quantization",
"release notes: mps"
] | 5
|
NONE
|
Fixes #149261
Previously, computations on large sequence lengths (seq_len > 12288) were failing due to excessive memory usage.
My solution:
Implemented Chunking for Memory-Efficient Computation:
- Instead of processing the full tensor in one step, we split operations into chunks (chunk_size = 4096). This avoids memory spikes (see the sketch below).
- No compromise on FP32 accuracy, unlike previous FP16-based workarounds.
Updated scaled_dot_product_attention Logic
- Loop-based computation replaces single-step matrix multiplication.
- The output tensor is built incrementally to stay within memory limits.
Also tested and verified: with chunking I was able to run seq_len=16384 without memory errors. Although chunking introduces a small computational overhead compared to running the full operation in one step, it prevents memory allocation failures.
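For reviewers, a rough Python-level illustration of the chunking idea (the actual change lives in the MPS backend, so the names and placement here are only illustrative):
```python
import torch
import torch.nn.functional as F

def chunked_sdpa(q, k, v, chunk_size=4096):
    # Each query chunk attends to the full key/value tensors, so the result is
    # identical to a single call; only the peak intermediate size is reduced.
    outputs = []
    for start in range(0, q.shape[-2], chunk_size):
        q_chunk = q[..., start:start + chunk_size, :]
        outputs.append(F.scaled_dot_product_attention(q_chunk, k, v))
    return torch.cat(outputs, dim=-2)
```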
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel
| true
|
2,922,674,805
|
[REPLACED WITH SPLITTED STACK] Cache kernel code generation in TritonTemplate.generate()
|
laithsakka
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149267
In one model we see ~40% of the time spent in mm/addmm tuning. The model has ~2000 mm ops, many of which receive the same input shapes.
With autotuning enabled this becomes expensive: while we already cache autotuning results, we did not previously cache the generation of the Python code and its loading for each config that we autotune on.
This diff handles the code generation part (template expansions); a previous diff handled the loading part.
This is expected to save ~20% on the model I am working on.
How do we do the caching?
For a given configuration and input layout, the generated code is always the same. One caveat is that some other information collected during code generation is input dependent (namely, it depends on input names and symbol names in the inputs), not just layout dependent.
To handle those we use a record-and-replay approach, where we record the functions that are called during code generation that affect those outputs and replay them on a cache hit.
Effect on the mm_loop benchmark:
26262641850 -> 20890757277
33527024884 -> 28316151238
Another win will come in the next PR on top of this, when we avoid replaying load_input by changing the way kernel.prologue_supported_inputs is computed.
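To make the record-and-replay idea concrete, here is a toy sketch (the function and attribute names are made up for illustration and are not the actual Inductor APIs):
```python
# Toy illustration only: cache expensive "code generation" keyed by (config, layout),
# while replaying the recorded input-dependent calls on every hit.
codegen_cache = {}

def generate(cache_key, expand_template, kernel):
    cached = codegen_cache.get(cache_key)
    if cached is None:
        recorded = []

        def record_and_call(method_name, *args):
            # remember side-effecting calls that depend on the current inputs
            recorded.append((method_name, args))
            return getattr(kernel, method_name)(*args)

        code = expand_template(record_and_call)  # expensive, done once per key
        codegen_cache[cache_key] = (code, recorded)
        return code

    code, recorded = cached
    # cache hit: skip template expansion, replay only the recorded calls
    for method_name, args in recorded:
        getattr(kernel, method_name)(*args)
    return code
```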
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,922,670,312
|
[Pushed as separate PR on different stack] cache kernel codegen and loading
|
laithsakka
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149266
* #148899
* #148893
* #148872
* #148742
* #148815
* #148809
* #148430
| true
|
2,922,652,833
|
[MPS/metal] Add missing `inline` to function definitions.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
MEMBER
| null | true
|
2,922,650,450
|
[MPS] Add support for modified_bessel_i0 in eager.
|
dcci
|
closed
|
[
"Merged",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 5
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,922,571,965
|
[MPS][BE] Move common binary ops macros to indexing.h
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149263
* #149262
And binary op invocation logic to OperationUtils.mm
This is a no-op change; additional sanity checks/logic improvements will be added as follow-ups.
| true
|
2,922,555,350
|
[EZ][BE] Reuse `result_of` from `c10/metal/utils.h`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149263
* __->__ #149262
No need for one more implementation
| true
|
2,922,480,434
|
MPS Error: NDArray > 2^32 bytes in scaled_dot_product_attention
|
HFDLYS
|
open
|
[
"triaged",
"module: 64-bit",
"module: mps",
"module: sdpa"
] | 0
|
NONE
|
### 🐛 Describe the bug
An error, `/AppleInternal/Library/BuildRoots/d187755d-b9a3-11ef-83e5-aabfac210453/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:850: failed assertion [MPSNDArray initWithDevice:descriptor:isTextureBacked:] Error: total bytes of NDArray > 2**32'`, appears to occur in `out = torch.nn.functional.scaled_dot_product_attention(q, k, v)` when processing large tensors. The following is the minimal, reproducible code:
```python
import torch
device = torch.device("mps")
q = torch.randn(1, 12, 29640, 128).to(device)
k = torch.randn(1, 12, 29640, 128).to(device)
v = torch.randn(1, 12, 29640, 128).to(device)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
print(out.shape)
```
### Versions
PyTorch version: 2.8.0.dev20250315
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.31.6
Libc version: N/A
Python version: 3.10.16 (main, Dec 3 2024, 17:27:57) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Max
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] torch==2.8.0.dev20250315
[pip3] torchaudio==2.6.0.dev20250315
[pip3] torchvision==0.22.0.dev20250315
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,922,271,830
|
ERROR: Could not find a version that satisfies the requirement torch==2.5.1
|
TheNotary
|
closed
|
[] | 4
|
NONE
|
### 🐛 Describe the bug
I'm having trouble installing recent versions of PyTorch using pyenv on macOS. It seems my pip isn't able to see any versions past 2.2.2.
```
# My usual commands to setup a virtualenv
$ python -m venv .venv
$ source .venv/bin/activate
$ pip cache purge
$ pip install --upgrade pip
$ which pip
/private/tmp/test_py/halp/.venv/bin/pip
$ pip install --index-url https://pypi.org/simple torch==2.5.1
ERROR: Could not find a version that satisfies the requirement torch==2.5.1 (from versions: 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2)
ERROR: No matching distribution found for torch==2.5.1
```
I removed all environment variables containing the string PYTHON from my env and am still stuck here.
```
$ env |grep PYTHON
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (x86_64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.5
Libc version: N/A
Python version: 3.9.20 (main, Mar 15 2025, 00:40:13) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-15.3.2-x86_64-i386-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
| true
|
2,922,250,508
|
Fix memory leak in subproc_pool future
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149259
Summary: The future holds a reference to the callback, and the callback captures the outer future. Seems to create a cycle that the garbage collector doesn't clean up. Verified by compiling 15k synthetic Triton kernels and observing that subprocess memory overhead improves.
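The pattern in question, boiled down (an illustrative sketch, not the actual subproc_pool code; per the summary above, this chain of references ended up in a cycle that was not reclaimed):
```python
from concurrent.futures import Future

def wrap(inner: Future) -> Future:
    outer = Future()

    def _forward_result(f: Future) -> None:
        # the callback closes over `outer`...
        outer.set_result(f.result())

    # ...while `inner` keeps a reference to the callback via its callback list
    inner.add_done_callback(_forward_result)
    return outer
```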
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,922,232,536
|
Auto-selective activation checkpointing is not optimal for speed (issue with min_cut_rematerialization_partition)
|
efsotr
|
open
|
[
"triaged",
"oncall: pt2",
"module: pt2-dispatcher"
] | 10
|
NONE
|
### 🐛 Describe the bug
I tried the new API described in the [pytorch blog: selective activation checkpointing](https://pytorch.org/blog/activation-checkpointing-techniques/#compile-only-memory-budget-api-new).
I then found that selective activation checkpointing is not optimal for speed.
A minimal reproducer:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["TORCH_COMPILE_DEBUG"] = "1"
import torch
from torch import nn
import torch.nn.functional as F
import torch._functorch.config
torch._functorch.config.activation_memory_budget = 0.99
class Test1(nn.Module):
def __init__(self):
super().__init__()
self.layer0 = nn.Linear(100, 100, bias=False)
self.layer1 = nn.Linear(100, 100, bias=False)
self.norm = nn.RMSNorm(100)
def forward(self, x):
x = self.norm(x)
return self.layer0(self.layer1(x))
class Test(nn.Module):
def __init__(self):
super().__init__()
self.embs = nn.Embedding(1000, 100)
self.layers = nn.ModuleList([Test1() for _ in range(32)])
def forward(self, x):
x = self.embs(x)
for layer in self.layers:
x = layer(x) + x
return x.sum()
x = torch.randint(0, 1000, (20,), device="cuda")
model = Test().cuda().bfloat16()
compiled_model = torch.compile(model)
y = compiled_model(x)
y.backward()
```
In the `torch_compile_debug` backward folder, the `fx_graph_readable.py` file shows an unusual series of additions.
```python
class GraphModule(torch.nn.Module):
def forward(self, primals_2: "i64[20]", primals_3: "bf16[100]", primals_6: "bf16[100]", primals_9: "bf16[100]", primals_12: "bf16[100]", primals_15: "bf16[100]", primals_18: "bf16[100]", primals_21: "bf16[100]", primals_24: "bf16[100]", primals_27: "bf16[100]", primals_30: "bf16[100]", primals_33: "bf16[100]", primals_36: "bf16[100]", primals_39: "bf16[100]", primals_42: "bf16[100]", primals_45: "bf16[100]", primals_48: "bf16[100]", primals_51: "bf16[100]", primals_54: "bf16[100]", primals_57: "bf16[100]", primals_60: "bf16[100]", primals_63: "bf16[100]", primals_66: "bf16[100]", primals_69: "bf16[100]", primals_72: "bf16[100]", primals_75: "bf16[100]", primals_78: "bf16[100]", primals_81: "bf16[100]", primals_84: "bf16[100]", primals_87: "bf16[100]", primals_90: "bf16[100]", primals_93: "bf16[100]", primals_96: "bf16[100]", embedding: "bf16[20, 100]", rsqrt: "bf16[20, 1]", mm: "bf16[20, 100]", mm_1: "bf16[20, 100]", rsqrt_1: "bf16[20, 1]", mm_2: "bf16[20, 100]", mm_3: "bf16[20, 100]", rsqrt_2: "bf16[20, 1]", mm_4: "bf16[20, 100]", mm_5: "bf16[20, 100]", rsqrt_3: "bf16[20, 1]", mm_6: "bf16[20, 100]", mm_7: "bf16[20, 100]", rsqrt_4: "bf16[20, 1]", mm_8: "bf16[20, 100]", mm_9: "bf16[20, 100]", rsqrt_5: "bf16[20, 1]", mm_10: "bf16[20, 100]", mm_11: "bf16[20, 100]", rsqrt_6: "bf16[20, 1]", mm_12: "bf16[20, 100]", mm_13: "bf16[20, 100]", rsqrt_7: "bf16[20, 1]", mm_14: "bf16[20, 100]", mm_15: "bf16[20, 100]", rsqrt_8: "bf16[20, 1]", mm_16: "bf16[20, 100]", mm_17: "bf16[20, 100]", rsqrt_9: "bf16[20, 1]", mm_18: "bf16[20, 100]", mm_19: "bf16[20, 100]", rsqrt_10: "bf16[20, 1]", mm_20: "bf16[20, 100]", mm_21: "bf16[20, 100]", rsqrt_11: "bf16[20, 1]", mm_22: "bf16[20, 100]", mm_23: "bf16[20, 100]", rsqrt_12: "bf16[20, 1]", mm_24: "bf16[20, 100]", mm_25: "bf16[20, 100]", rsqrt_13: "bf16[20, 1]", mm_26: "bf16[20, 100]", mm_27: "bf16[20, 100]", rsqrt_14: "bf16[20, 1]", mm_28: "bf16[20, 100]", mm_29: "bf16[20, 100]", rsqrt_15: "bf16[20, 1]", mm_30: "bf16[20, 100]", mm_31: "bf16[20, 100]", rsqrt_16: "bf16[20, 1]", mm_32: "bf16[20, 100]", mm_33: "bf16[20, 100]", rsqrt_17: "bf16[20, 1]", mm_34: "bf16[20, 100]", mm_35: "bf16[20, 100]", rsqrt_18: "bf16[20, 1]", mm_36: "bf16[20, 100]", mm_37: "bf16[20, 100]", rsqrt_19: "bf16[20, 1]", mm_38: "bf16[20, 100]", mm_39: "bf16[20, 100]", rsqrt_20: "bf16[20, 1]", mm_40: "bf16[20, 100]", mm_41: "bf16[20, 100]", rsqrt_21: "bf16[20, 1]", mm_42: "bf16[20, 100]", mm_43: "bf16[20, 100]", rsqrt_22: "bf16[20, 1]", mm_44: "bf16[20, 100]", mm_45: "bf16[20, 100]", rsqrt_23: "bf16[20, 1]", mm_46: "bf16[20, 100]", mm_47: "bf16[20, 100]", rsqrt_24: "bf16[20, 1]", mm_48: "bf16[20, 100]", mm_49: "bf16[20, 100]", rsqrt_25: "bf16[20, 1]", mm_50: "bf16[20, 100]", mm_51: "bf16[20, 100]", rsqrt_26: "bf16[20, 1]", mm_52: "bf16[20, 100]", mm_53: "bf16[20, 100]", rsqrt_27: "bf16[20, 1]", mm_54: "bf16[20, 100]", mm_55: "bf16[20, 100]", rsqrt_28: "bf16[20, 1]", mm_56: "bf16[20, 100]", mm_57: "bf16[20, 100]", rsqrt_29: "bf16[20, 1]", mm_58: "bf16[20, 100]", mm_59: "bf16[20, 100]", rsqrt_30: "bf16[20, 1]", mm_60: "bf16[20, 100]", mm_61: "bf16[20, 100]", rsqrt_31: "bf16[20, 1]", mm_62: "bf16[20, 100]", permute_66: "bf16[100, 100]", permute_70: "bf16[100, 100]", permute_74: "bf16[100, 100]", permute_78: "bf16[100, 100]", permute_82: "bf16[100, 100]", permute_86: "bf16[100, 100]", permute_90: "bf16[100, 100]", permute_94: "bf16[100, 100]", permute_98: "bf16[100, 100]", permute_102: "bf16[100, 100]", permute_106: "bf16[100, 100]", permute_110: "bf16[100, 100]", permute_114: "bf16[100, 100]", 
permute_118: "bf16[100, 100]", permute_122: "bf16[100, 100]", permute_126: "bf16[100, 100]", permute_130: "bf16[100, 100]", permute_134: "bf16[100, 100]", permute_138: "bf16[100, 100]", permute_142: "bf16[100, 100]", permute_146: "bf16[100, 100]", permute_150: "bf16[100, 100]", permute_154: "bf16[100, 100]", permute_158: "bf16[100, 100]", permute_162: "bf16[100, 100]", permute_166: "bf16[100, 100]", permute_170: "bf16[100, 100]", permute_174: "bf16[100, 100]", permute_178: "bf16[100, 100]", permute_182: "bf16[100, 100]", permute_186: "bf16[100, 100]", permute_190: "bf16[100, 100]", permute_194: "bf16[100, 100]", permute_198: "bf16[100, 100]", permute_202: "bf16[100, 100]", permute_206: "bf16[100, 100]", permute_210: "bf16[100, 100]", permute_214: "bf16[100, 100]", permute_218: "bf16[100, 100]", permute_222: "bf16[100, 100]", permute_226: "bf16[100, 100]", permute_230: "bf16[100, 100]", permute_234: "bf16[100, 100]", permute_238: "bf16[100, 100]", permute_242: "bf16[100, 100]", permute_246: "bf16[100, 100]", permute_250: "bf16[100, 100]", permute_254: "bf16[100, 100]", permute_258: "bf16[100, 100]", permute_262: "bf16[100, 100]", permute_266: "bf16[100, 100]", permute_270: "bf16[100, 100]", permute_274: "bf16[100, 100]", permute_278: "bf16[100, 100]", permute_282: "bf16[100, 100]", permute_286: "bf16[100, 100]", permute_290: "bf16[100, 100]", permute_294: "bf16[100, 100]", permute_298: "bf16[100, 100]", permute_302: "bf16[100, 100]", permute_306: "bf16[100, 100]", permute_310: "bf16[100, 100]", permute_314: "bf16[100, 100]", permute_318: "bf16[100, 100]", tangents_1: "bf16[]"):
# File: /tmp/ipykernel_1043308/3460069279.py:38 in forward, code: return x.sum()
expand: "bf16[20, 100]" = torch.ops.aten.expand.default(tangents_1, [20, 100]); tangents_1 = None
# File: /tmp/ipykernel_1043308/3460069279.py:24 in forward, code: return self.layer0(self.layer1(x))
permute_64: "bf16[100, 20]" = torch.ops.aten.permute.default(expand, [1, 0])
mm_64: "bf16[100, 100]" = torch.ops.aten.mm.default(permute_64, mm_62); permute_64 = mm_62 = None
permute_65: "bf16[100, 100]" = torch.ops.aten.permute.default(mm_64, [1, 0]); mm_64 = None
mm_65: "bf16[20, 100]" = torch.ops.aten.mm.default(expand, permute_66); permute_66 = None
permute_67: "bf16[100, 100]" = torch.ops.aten.permute.default(permute_65, [1, 0]); permute_65 = None
permute_68: "bf16[100, 20]" = torch.ops.aten.permute.default(mm_65, [1, 0])
# File: /tmp/ipykernel_1043308/3460069279.py:37 in forward, code: x = layer(x) + x
add_1: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_1, embedding); mm_1 = None
add_3: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_3, add_1); mm_3 = None
add_5: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_5, add_3); mm_5 = None
add_7: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_7, add_5); mm_7 = None
add_9: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_9, add_7); mm_9 = None
add_11: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_11, add_9); mm_11 = None
add_13: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_13, add_11); mm_13 = None
add_15: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_15, add_13); mm_15 = None
add_17: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_17, add_15); mm_17 = None
add_19: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_19, add_17); mm_19 = None
add_21: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_21, add_19); mm_21 = None
add_23: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_23, add_21); mm_23 = None
add_25: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_25, add_23); mm_25 = None
add_27: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_27, add_25); mm_27 = None
add_29: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_29, add_27); mm_29 = None
add_31: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_31, add_29); mm_31 = None
add_33: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_33, add_31); mm_33 = None
add_35: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_35, add_33); mm_35 = None
add_37: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_37, add_35); mm_37 = None
add_39: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_39, add_37); mm_39 = None
add_41: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_41, add_39); mm_41 = None
add_43: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_43, add_41); mm_43 = None
add_45: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_45, add_43); mm_45 = None
add_47: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_47, add_45); mm_47 = None
add_49: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_49, add_47); mm_49 = None
add_51: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_51, add_49); mm_51 = None
add_53: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_53, add_51); mm_53 = None
add_55: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_55, add_53); mm_55 = None
add_57: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_57, add_55); mm_57 = None
add_59: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_59, add_57); mm_59 = None
add_61: "bf16[20, 100]" = torch.ops.aten.add.Tensor(mm_61, add_59); mm_61 = None
```
A simple observation reveals that the forward pass has been transformed into the following pattern:
x1 = x0 + y0,
x2 = x1 + y1,
x3 = x2 + y2.
Here, x0, x1, x2, and x3 are all needed for the backward computation.
The optimal approach would therefore be to store x0, x1, x2, and x3.
However, due to an issue in the `min_cut_rematerialization_partition` implementation used by `torch.compile`, which allows recomputation of non-compute-intensive operations, it instead stores x0, y0, y1, and y2, and recomputes x1, x2, and x3.
Although both choices use the same amount of memory, the latter introduces unnecessary recomputation.
### Error logs
_No response_
### Versions
torch 2.5.1+cu124
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,922,224,705
|
[BE]: Apply ruff PERF403 to use dict comprehensions more often
|
Skylion007
|
closed
|
[
"oncall: distributed",
"open source",
"better-engineering",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: quantization",
"release notes: releng",
"fx",
"module: inductor",
"ciflow/inductor",
"release notes: distributed (checkpoint)",
"ci-no-td"
] | 8
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,922,204,114
|
Broken link
|
felipemello1
|
closed
|
[
"module: docs",
"triaged"
] | 1
|
NONE
|
### 📚 The doc issue
https://pytorch.org/docs/stable/distributed.checkpoint.html
<img width="515" alt="Image" src="https://github.com/user-attachments/assets/5a59211d-c138-4bc2-9fc8-f020d11b15b9" />
I think that the torchtitan link should be: https://github.com/pytorch/torchtitan/blob/main/torchtitan/components/checkpoint.py
### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
2,922,188,791
|
second-half
|
rec
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149255
* #149210
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,922,167,909
|
[BE][Ez]: Update CU126 to CUDNN 12.8 too
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"Reverted",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 24
|
COLLABORATOR
|
Make cuDNN the same version for the CUDA 12.6 and 12.8 builds for better performance and consistency. We can't do this for CU12.1 because it's not supported, and CU12.4 isn't updated due to manywheel Linux compatibility reasons and because support for it is being dropped.
| true
|
2,921,983,884
|
Add batch dim sharding rule to sdpa
|
fmassa
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
MEMBER
|
This is a trivial rule that for most cases isn't needed, but if we want to consider that the input data is actually `Shard(0)` (instead of `Replicated()` as it is currently assumed), then we need this rule.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,921,917,669
|
INTERNAL ASSERT FAILED in `torch.autograd`
|
x0w3n
|
closed
|
[
"module: autograd",
"triaged",
"module: assert failure",
"actionable"
] | 1
|
NONE
|
### 🐛 Describe the bug
torch.autograd raises an INTERNAL ASSERT FAILED error accompanied by the message: "please report a bug to PyTorch."
minimal example:
```python
import torch
class CustomRepeatInterleave(torch.autograd.Function):
@staticmethod
def forward(ctx, input, repeats):
ctx.repeats = repeats
output = input.repeat_interleave(repeats)
ctx.mark_dirty(output)
return output
@staticmethod
def backward(ctx, grad_output):
repeats = ctx.repeats
grad_input = torch.zeros_like(ctx.saved_tensors[0])
for i in range(repeats):
grad_input += grad_output[i] # Fixed the closing parenthesis here
return grad_input, None
# Example usage
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
repeats = 2
y = CustomRepeatInterleave.apply(x, repeats)
z = y.sum()
z.backward()
print(x.grad)
```
Output
```
RuntimeError Traceback (most recent call last)
[<ipython-input-2-2ab64fef08a5>](https://localhost:8080/#) in <cell line: 0>()
20 x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
21 repeats = 2
---> 22 y = CustomRepeatInterleave.apply(x, repeats)
23
24 z = y.sum()
[/usr/local/lib/python3.11/dist-packages/torch/autograd/function.py](https://localhost:8080/#) in apply(cls, *args, **kwargs)
573 # See NOTE: [functorch vjp and autograd interaction]
574 args = _functorch.utils.unwrap_dead_wrappers(args)
--> 575 return super().apply(*args, **kwargs) # type: ignore[misc]
576
577 if not is_setup_ctx_defined:
RuntimeError: creation_meta == CreationMeta::DEFAULT INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/autograd/variable.cpp":224, please report a bug to PyTorch.
```
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,921,910,989
|
false INTERNAL ASSERT FAILED in `torch.jit.trace`
|
x0w3n
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
torch.jit.trace raises a false INTERNAL ASSERT FAILED when tracing the model, accompanied by the message: "please report a bug to PyTorch."
minimal example:
```python
import torch
import torch.nn as nn
import torch.nn.parameter as parameter
# Step 1: Define the PyTorch function
class StatefulModel(nn.Module):
def __init__(self):
super(StatefulModel, self).__init__()
self.param = parameter.Parameter(torch.tensor(0.0))
def forward(self, x):
self.param.data += x
return self.param
# Step 2: Create an instance of the model
model = StatefulModel()
# Step 3: Attempt to serialize and deserialize the model (simulating deployment)
import torch.jit as jit
# Trace the model
traced_model = jit.trace(model, (torch.tensor(1.0),))
# Save and load the traced model
model_script = jit.script(model)
model_script.save('stateful_model.pt')
model_script = jit.script(model).load('stateful_model.pt')
# Step 4: Test the loaded model
input_tensor = torch.tensor(2.0)
output = model_script(input_tensor)
print(output)
```
Output
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-1-ab84b4271799>](https://localhost:8080/#) in <cell line: 0>()
20
21 # Trace the model
---> 22 traced_model = jit.trace(model, (torch.tensor(1.0),))
23
24 # Save and load the traced model
2 frames
[/usr/local/lib/python3.11/dist-packages/torch/jit/_trace.py](https://localhost:8080/#) in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_kwarg_inputs, _store_inputs)
998 )
999
-> 1000 traced_func = _trace_impl(
1001 func,
1002 example_inputs,
[/usr/local/lib/python3.11/dist-packages/torch/jit/_trace.py](https://localhost:8080/#) in _trace_impl(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_kwarg_inputs, _store_inputs)
694 else:
695 raise RuntimeError("example_kwarg_inputs should be a dict")
--> 696 return trace_module(
697 func,
698 {"forward": example_inputs},
[/usr/local/lib/python3.11/dist-packages/torch/jit/_trace.py](https://localhost:8080/#) in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit, example_inputs_is_kwarg, _store_inputs)
1274 else:
1275 example_inputs = make_tuple(example_inputs)
-> 1276 module._c._create_method_from_trace(
1277 method_name,
1278 func,
RuntimeError: outputs_[i]->uses().empty() INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/ir.cpp":1307, please report a bug to PyTorch.
```
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.13 (main, Oct 13 2022, 21:15:33) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) GOLD 6530
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 320 MiB (2 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,921,887,170
|
Implement batching for torch.isin operator
|
nikhilgv9
|
open
|
[
"triaged",
"module: vmap",
"module: functorch"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
Received the following warning while invoking vmap() on a function that uses the `torch.isin` operator.
```
...Temp/ipykernel_20808/3722652185.py#line=49)
: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::isin.Tensor_Tensor.
Please file us an issue on GitHub so that we can prioritize its implementation.
(Triggered internally at [C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\functorch\BatchedFallback.cpp:84]
(file:///C:/actions-runner/_work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/BatchedFallback.cpp#line=83).)
return torch.where(torch.isin(uniqs, doublesT), 0, 1)
```
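As a workaround until the batching rule lands, the same result can be computed with ops that already have batching rules (a small sketch; `uniqs` and `doublesT` are the tensor names from the warning above):
```python
import torch

def isin_via_broadcast(elements: torch.Tensor, test_elements: torch.Tensor) -> torch.Tensor:
    # Matches torch.isin for 1-D inputs (no assume_unique/invert handling),
    # built only from ops that vmap already knows how to batch.
    return (elements.unsqueeze(-1) == test_elements).any(dim=-1)

# drop-in for the line from the warning:
# torch.where(isin_via_broadcast(uniqs, doublesT), 0, 1)
```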
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,921,849,715
|
[AOTInductor] [BE] Add macro for loading symbols in aoti runner
|
muchulee8
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149161
* __->__ #149249
Summary:
Add macro for loading symbols in aoti runner
Test Plan:
Existing tests
Reviewers:
Subscribers:
Tasks:
Tags:
| true
|
2,921,830,948
|
Fix multiprocessing with CUDA_VISIBLE_DEVICES seems to give the wrong device
|
fzyzcjy
|
open
|
[
"triaged",
"open source",
"release notes: distributed (miscellaneous)"
] | 3
|
CONTRIBUTOR
|
Fixes #149196
This is merely a proof-of-concept PR. I would like to hear a bit of feedback on whether the direction is acceptable before working on it further.
Things that will be added if the direction of the PR looks acceptable: unit tests, caches, a C++ implementation (for speed), etc.
| true
|
2,921,740,810
|
[Async TP] More robust support for rowwise scales when fusing matmul reduce-scatter
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"Merged",
"release notes: distributed (pipeline)",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Part of https://github.com/pytorch/torchtitan/issues/866
## Context
- Async TP needs to support the "reshape -> scaled_mm -> reshape" pattern because scaled mm only supports 2D input tensors and 2D scales (a minimal numeric sketch of the pattern is shown right after this list).
- (a,b,c) => (a*b,c)
- (a\*b,c) @ (c,d) = (a\*b,d)
- (a\*b,d) => (a,b,d)
- Currently the implementation does not support scaled mm with rowwise scales **for all cases** of the reshape -> scaled_mm -> reshape pattern. The minimal example of this pattern is confirmed to work via this [unit test](https://github.com/pytorch/pytorch/blob/00a2c68f67adbd38847845016fd1ab9275cefbab/test/distributed/tensor/parallel/test_micro_pipeline_tp.py#L406), but more involved e2e examples in torchtitan fail silently (more context in final bullet point).
- Previously, the "A tensor" **node** referenced in the async TP graph manipulation code was the 3D+ node from before the reshape, but the "A_scale" node was the 2D node from after the reshape, so they were incompatible.
- I previously implemented a simpler solution to this problem in https://github.com/pytorch/pytorch/pull/148001, with a [unit test](https://github.com/pytorch/pytorch/pull/148001/files#diff-115f1d0852382c9b58f22640d80999d879b33618e5f6c633fc9e4d0ca9781cecR406) confirming the fused node is indeed in the graph for the minimal example of the reshape->mm->reshape pattern. I also confirmed via manual e2e testing w/ torchtitan that the crash I was fixing no longer occurred. However, it turns out due to this [bug in torchtitan](https://github.com/pytorch/torchtitan/issues/866) it was causing async TP to fail silently and fall back to vanilla TP, hiding the fact that this original solution fixed the crash but the fusion would not occur for rowwise scales. Thus, more robust solution is needed to support all cases.
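As referenced above, a minimal numeric sketch of the reshape -> mm -> reshape pattern (plain matmul is used for illustration; the real code path uses scaled_mm with 2D scales):
```python
import torch

a, b, c, d = 2, 3, 4, 5
x = torch.randn(a, b, c)                           # 3D+ "A" tensor
w = torch.randn(c, d)

out = (x.reshape(a * b, c) @ w).reshape(a, b, d)   # reshape -> mm -> reshape
assert torch.allclose(out, x @ w)                  # same result as the direct 3D matmul
```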
## Solution TL;DR
- Use the 2D 'A' tensor and corresponding 2D scales as input to the fused_matmul_reduce_scatter implementation, instead of the 3D+ tensor/scales.
- Track the "pre mm reshape" and "post mm reshape" separately, to be referenced in the `fused_scaled_matmul_reduce_scatter` implementation, to update the scatter dim through the pre-mm reshape, and apply the post-mm reshape before applying the reduce scatter and returning the output tensor.
- Separate the `fused_matmul_reduce_scatter` and the `fused_scaled_matmul_reduce_scatter` code paths, to simplify them both.
- By fixing the bug in torchtitan (PR https://github.com/pytorch/torchtitan/pull/965) and implementing support for rowwise scales in pytorch in this PR, together these changes will solve the problem of how to support rowwise scales with all types of AC.
## Additional details for reviewers
To use the 2D A tensor while also supporting the "reshape -> mm -> reshape" pattern, the following other changes were needed:
- Track the pre-mm reshape, as it will affect the scatter dim used in the fused_matmul_reduce_scatter implementation.
- Track the post-mm reshape, as it will affect the output shape used in the fused_matmul_reduce_scatter implementation.
- Based on the pre-mm reshape and the original scatter dim, calculate the new scatter dim for the 2D tensor. This is needed because during the pipelined producer mm implementation, the scatter dim is moved to dim 0 (so it can be sharded along the first dim and then chunked for the mm ops by indexing into the first dim), then moved back to its original place before the reduce-scatter.
- Use the tracked post-mm reshape to reshape the stacked partial 2D outputs of the mm ops into 3D outputs needed for 1) the reduce-scatter w/ the original scatter dim, and 2) the expected output shape to prevent shape errors with subsequent ops.
## Test plan
- All existing unit tests passing.
- Expand unit tests for rowwise scales to test more scatter dims
- Added unit tests enforcing that async TP fails fast / throws an error if it fails to perform any fusions. Previously it just "failed silently" (fell back to vanilla TP without the user knowing) which has led to confusion, so this will improve the UX.
- Compared loss curves of bf16 vs float8 w/ rowwise scales to confirm integrity of numerics
- Confirmed via manual testing with torchtitan and inspecting the compile graph that the fusion is working as intended for:
- bfloat16
- float8 with tensorwise scales
- float8 with rowwise scales
## Loss curves
Loss curves are virtually identical for bf16 + vanilla TP versus float8 with rowwise scales + async TP:
<img width="1017" alt="loss_async_tp" src="https://github.com/user-attachments/assets/4995db78-7012-490f-a370-f4fecc289a22" />
## Performance
#### Per op SAC
Performance benchmarks for torchtitan Llama3 8b training runs on 4 H100s with per op SAC, using FSDP degree=2, TP degree=2:
- bf16 (vanilla TP): TPS 5161.5, peak memory 50.53 GB
- bf16 (async TP): TPS 5229.5, peak memory 50.68 GB
- float8 tensorwise (vanilla TP): TPS: 5959.5, peak memory: 50.47 GB
- float8 tensorwise (async TP): TPS 5964.5, peak memory 50.47 GB
- float8 rowwise (vanilla TP): TPS: 4962.0, peak memory: 50.55 GB
- float8 rowwise (async TP): TPS 4966.5, peak memory 50.65 GB
#### Full AC
Llama3 70b training runs on 128 H100s with full AC, using FSDP=16, TP=8
- bf16 (vanilla TP): 598 TPS, peak memory 71.51 GB
- bf16 (async TP): 673 TPS, peak memory 71.08 GB (+12.54% TPS vs vanilla TP)
- float8 tensorwise (vanilla TP): 820 TPS, peak memory 55.26 GB
- float8 tensorwise (async TP): 950 TPS, peak memory 55.91 GB (+15.85% TPS vs vanilla TP)
- float8 rowwise (vanilla TP): 540 TPS, peak memory 71.46 GB
- float8 rowwise (async TP): 560 TPS, peak memory 70.65 GB (+3.7% TPS vs vanilla TP but still unexpectedly lower than bf16)
As you can see, float8 rowwise is working but performance needs to be improved further.
## Other changes
- Added logging so the user will know why fusion failed if it does.
- Remove logic which inserted a reshape node targeting "A scale" to get it to be in 3D like the "A tensor" since it's no longer needed.
## Long term plan
- Add a `scaled_matmul` op in pytorch, which will natively support a 3D+ "A tensor" and allow us to simplify the async TP implementation by avoiding the reshape -> scaled_mm -> reshape pattern and the special handling for it.
## Visualizing fused nodes in graphs for torchtitan training runs
Below are examples of the visualized graph generated by torch compile for torchtitan llama3 8b training runs with per op SAC. These graphs provide additional evidence (beyond the new unit tests added) that the implementation is working correctly.
### bf16
<img width="900" alt="bf16-fusion" src="https://github.com/user-attachments/assets/a3bed917-28eb-4a56-8d6e-2d2bf498385c" />
### float8 with tensorwise scales
<img width="900" alt="tensorwise-node" src="https://github.com/user-attachments/assets/b212ec4a-1899-44de-a4de-18c74e1de68a" />
### float8 with rowwise scales
<img width="900" alt="rowwise" src="https://github.com/user-attachments/assets/ed3354a3-894b-4ec9-86d0-f80364bf3d83" />
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,921,641,615
|
Not generate custom obj json when it's empty
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary: as title.
See internal Diff summary for more context.
Test Plan: buck run @fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r config_not_generated
Differential Revision: D71241676
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,921,608,816
|
[ROCm] Fixes and improvements to CUDA->HIP flag conversion for CPP extensions
|
naromero77amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 13
|
COLLABORATOR
|
Fixes https://github.com/ROCm/hip/issues/3764.
Fixes and improvements to CUDA->HIP flag conversion for CPP extensions
- Log flag conversion for debugging purposes.
- Fix cases where the conversion should not touch the -I flags, and cases where CUDA appears more than once, by replacing only the first instance (see the sketch after this list).
- Fix the case where the nvcc key may not exist.
- Fix the case where hipify should ignore flag values and only touch the flag itself.
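For illustration, the first-instance / skip-include-paths rule amounts to something like the following (a simplified sketch, not the actual hipify code):
```python
def convert_flag(flag: str) -> str:
    # Include paths are left untouched; otherwise only the first "CUDA"
    # occurrence in the flag is rewritten, so later mentions survive.
    if flag.startswith("-I"):
        return flag
    return flag.replace("CUDA", "HIP", 1)

# e.g. convert_flag("-DUSE_CUDA") -> "-DUSE_HIP"
#      convert_flag("-I/usr/local/cuda/include") is returned unchanged
```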
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,921,597,904
|
torch.sum_of_squares()
|
ad8e
|
open
|
[
"triaged",
"module: python frontend"
] | 6
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
It would be equivalent to `torch.linalg.vector_norm(..., ord=2) ** 2`.
In my codebase, 14/18 of the norm calls have `** 2` immediately after. So the sum of squares is more common than the vector norm.
This is slightly faster because `vector_norm` calls `sqrt`, which is extraneous if we undo it afterward.
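A minimal sketch of what the op could compute (name and signature are only a suggestion):
```python
import torch

def sum_of_squares(x: torch.Tensor, dim=None, keepdim=False) -> torch.Tensor:
    # Same value as torch.linalg.vector_norm(x, ord=2, dim=dim, keepdim=keepdim) ** 2,
    # but without the sqrt that the norm computes only for the ** 2 to undo.
    if dim is None:
        return (x * x).sum()
    return (x * x).sum(dim=dim, keepdim=keepdim)

x = torch.randn(8, 16)
assert torch.allclose(sum_of_squares(x), torch.linalg.vector_norm(x) ** 2)
```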
### Alternatives
`vector_norm(...) ** 2` is just one extra kernel call.
### Additional context
If you care about abstract theory, this is because the sum of squares is more natural than the vector norm. They both measure the size of something, and size is linear in the square domain. Examples: reduction of grad norm across devices works with squares but not norms; variance is additive while standard deviation is not; the Pythagorean theorem adds squares of terms.
cc @albanD
| true
|
2,921,565,160
|
Fix AOTI update_constant_buffer issue.
|
henryoier
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Summary:
In D69553929 we changed the logic of constant & buffer update in AOTI. However, this is incompatible with the current Sigmoid runtime since we have different logic for passing in buffers, resulting in errors like
```
I0310 17:29:24.456960 3679102 AOTIDelegateExecutor.cpp:89] AOTIDelegateExecutor processing weights
*** Aborted at 1741652964 (Unix time, try 'date -d 1741652964') ***
*** Signal 11 (SIGSEGV) (0x30) received by PID 3679102 (pthread TID 0x7f9933e49000) (linux TID 3679102) (code: address not mapped to object), stack trace: ***
@ 00000000000040b9 folly::symbolizer::(anonymous namespace)::signalHandler(int, siginfo_t*, void*)
./fbcode/folly/debugging/symbolizer/SignalHandler.cpp:453
@ 0000000000006c45 folly::fibers::(anonymous namespace)::sigsegvSignalHandler(int, siginfo_t*, void*)
./fbcode/folly/fibers/GuardPageAllocator.cpp:237
@ 000000000004455f (unknown)
/home/engshare/third-party2/glibc/2.34/src/glibc-2.34/signal/../sysdeps/unix/sysv/linux/libc_sigaction.c:8
-> /home/engshare/third-party2/glibc/2.34/src/glibc-2.34/signal/../sysdeps/unix/sysv/linux/x86_64/libc_sigaction.c
@ 00000000001e8164 torch::aot_inductor::AOTInductorModelContainer::update_constant_buffer(std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, AtenTensorOpaque*, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, AtenTensorOpaque*> > > const&, bool, bool)
```
Test Plan:
1) Generate lowered merge net
```
CUDA_VISIBLE_DEVICES=0 ../buck-out/v2/gen/fbcode/b5b13003c82cbdec/caffe2/torch/fb/model_transform/fx2trt/packaging/__generate_merge_net_file__/generate_merge_net_file.par --action=generate --input-file=/home/shengqin/models/aoti_sigmoid_test/cmf_interformer_with_custom_triton_kernels_691990503_0_input --output-file=/home/shengqin/models/aoti_sigmoid_test/cmf_interformer_with_custom_triton_kernels_691990503_0_output.aoti_sigmoid --lower-backend=aot_inductor --use_sigmoid=true --aot_inductor_config="{'max_autotune': True, 'comprehensive_padding': False}" --add_passes=use_matmul_lce_replace_normal_LCE,use_triton_dot_compress,use_matmul_fuse_lce_replace_first_LCE,use_contiguous_linear_reduction_replace_linear_reduction --disable_acc_tracer=false
```
2) Load net predictor
```
CUDA_VISIBLE_DEVICES=1 ../buck-out/v2/gen/fbcode/103717df3cc2b97a/caffe2/torch/fb/model_transform/fx2trt/packaging/__load_net_predictor__/load_net_predictor --loadMode=AccuracyAB --inputNetFile=/home/shengqin/models/aoti_sigmoid_test/cmf_interformer_with_custom_triton_kernels_691990503_0_output.aoti_ts --otherNetFile=/home/shengqin/models/aoti_sigmoid_test/cmf_interformer_with_custom_triton_kernels_691990503_0_output.aoti_sigmoid --moduleName=merge --benchmarkEnableProfiling=false —-predictor_hardware_type=1 --disableStaticRuntime=true
```
Reviewed By: hl475
Differential Revision: D71236710
| true
|
2,921,546,438
|
[AOTI][XPU] Fix: model_container_runner_xpu.cpp is not built into libtorch_xpu.so
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149175
The omission of model_container_runner_xpu.cpp causes a compilation failure when users build a C++ inference application on XPU.
| true
|
2,921,539,670
|
[DO NOT LAND] Padded Tensor PoC
|
BoyuanFeng
|
closed
|
[
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 1
|
CONTRIBUTOR
|
Initial implementation of padded tensor. We use FakeTensor for shape propagation.
## Example 1
```python
import torch
from padded_tensor import PaddedTensor
@torch.compile(fullgraph=True)
def f(x):
x1 = x + 1
return x1 * 2
def run(shape):
x = torch.randn(*shape, device="cuda")
pad_x = PaddedTensor.from_tensor(x, multipliers={0:4, 1:4})
assert hasattr(pad_x, "multipliers"), breakpoint()
out = f(pad_x)
print(f"out.shape:{out.shape}, out.tensor.shape:{out.tensor.shape}")
run((2,3))
run((3,4))
run((5,6))
# out.shape:torch.Size([2, 3]), out.tensor.shape:torch.Size([4, 4])
# out.shape:torch.Size([3, 4]), out.tensor.shape:torch.Size([4, 4])
# out.shape:torch.Size([5, 6]), out.tensor.shape:torch.Size([8, 8])
```
Generated code: [P1756768916](https://www.internalfb.com/phabricator/paste/view/P1756768916)
## Example 2
```python
import torch
from padded_tensor import PaddedTensor
@torch.compile(fullgraph=True, mode="reduce-overhead")
def f(x, y):
x1 = x + 1
return x1 @ y
def run(shape):
x = torch.randn(*shape, device="cuda")
y = torch.randn(*shape[::-1], device="cuda")
pad_x = PaddedTensor.from_tensor(x, multipliers={0:4, 1:4})
pad_y = PaddedTensor.from_tensor(y, multipliers={0:4, 1:4})
out = f(pad_x, pad_y)
print(f"out.shape:{out.shape}, out.tensor.shape:{out.tensor.shape}")
run((2,3))
run((3,4))
run((5,6))
# out.shape:torch.Size([2, 2]), out.tensor.shape:torch.Size([4, 4])
# out.shape:torch.Size([3, 3]), out.tensor.shape:torch.Size([4, 4])
# out.shape:torch.Size([5, 5]), out.tensor.shape:torch.Size([8, 8])
```
Generated code: [P1756771290](https://www.internalfb.com/phabricator/paste/view/P1756771290)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,921,503,187
|
[export] Minor refactor to trace.py
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"merging",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Minor refactor to trace.py
* Removed `_strict_export_lower_to_aten_ir` in favor of just `_strict_export` and `_non_strict_export`
* Matched the APIs of `_strict_export` and `_non_strict_export`
* Instead of a `lower_to_aten_callback` which is a callable, or `dispatch_tracing_mode`, both functions take in a `_to_aten_func` which can be either `_export_to_aten_ir_make_fx` or `_export_to_aten_ir`.
| true
|
2,921,482,553
|
Fix torchbind schema str generation
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary: Fix Torchbind HOP schema generation when there's no input
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r schema
```
Differential Revision: D71231164
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,921,472,944
|
[Reland] First version of statically compiled launcher for triton compiled CUDA kernels
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148890
* __->__ #149238
This is a new version of https://github.com/pytorch/pytorch/pull/148561 fixing the ROCM test failure
Putting this up for a first pass review, though I will likely make a bunch of changes before landing to add more features, etc.
This diff implements a first version of a static CUDA kernel launcher in `torch._C`. The goal here is to take a cubin file and some metadata from a CompiledKernel from `triton`, and launch the cubin file directly.
Background doc: https://docs.google.com/document/d/1rjRcHl6MfauHG30nCoQX-9UKvKyIs4WWMy_GsGyqb9g/edit?tab=t.0#heading=h.ut5lf39lzq66
Normally, using triton's CompiledKernel.make_launcher(), we would pay the cost of codegenning C++ and running it at compile time. With this new approach, we can use one statically compiled library to launch the kernel.
The tradeoff here is that this new kernel launcher will not be able to use codegen to deal with different lengths/types of arguments. So we use templating to handle up to 10 arguments for now. We also allocate 8 bytes on the stack per argument no matter the argument type, which can take more memory than codegenning. On the other hand, we improve compile time on cold and warm start by not having to call the C++ compiler at all.
This diff does not add the launcher to torch, but introduces a basic test suite.
A list of TODOs that are not yet complete:
- Handle `nvTmaDesc` and `cuTensorMap`, which triton handles
- Embed the grid logic instead of passing in gridX,Y,Z
- Handle launch_enter and exit hooks? (Not sure if inductor has these)
- Benchmarking to see if there's runtime performance loss
- Probably lots of features of the triton C++ generated code that I haven't handled yet.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,921,469,668
|
[EZ][BE] Remove cross-compilation options from mac-build.yml
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149237
It has long been gone
| true
|
2,921,467,828
|
[ARM64][CUDA] skip string pattern matching in `test_workspace_allocation_error`
|
eqy
|
closed
|
[
"open source",
"module: arm",
"Merged",
"module: cuda graphs",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
`unwind()` on ARM64 seems to elide the strings of interest
cc @malfet @snadampal @milpuz01 @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,921,447,063
|
[export] specialize for aten.to
|
pianpwk
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"ciflow/inductor",
"release notes: export"
] | 11
|
CONTRIBUTOR
|
Changes decomposition behavior of `aten.to` to respect the aliasing/non-aliasing behavior in eager, and to specialize to the input/conversion dtype & device.
Before change: we always decompose `aten.to` into `_to_copy`, regardless of aliasing behavior. This leads us to ban mutations on the result of `_to_copy` when aliased, since we can't guarantee correct program semantics. This meant users had to explicitly call `.clone()` before mutating. In the special cases where we don’t ban mutations (e.g. dtype conversion), we add runtime assertions on the input & conversion dtype/devices in the decomposed program (see https://github.com/pytorch/pytorch/pull/142420).
After change: we decompose to the aliasing/non-aliasing behavior that matches eager, allowing mutations in all cases. We also add dtype/device assertions for all `aten.to` ops, starting in the pre-dispatch graph, basically specializing the program to the dtype/devices.
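For context, the eager aliasing behavior being matched (a small standalone illustration, not part of the diff):
```python
import torch

x = torch.randn(3, dtype=torch.float32)

y = x.to(torch.float32)          # dtype/device already match: eager returns an alias
assert y.data_ptr() == x.data_ptr()
y.add_(1)                        # mutating y is visible through x

z = x.to(torch.float64)          # real conversion: eager returns a new tensor
assert z.data_ptr() != x.data_ptr()
```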
Differential Revision: D71229547
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,921,404,957
|
[BE] Parametrize `TestMPS.test_binops_dtype_precedence`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149234
* #149233
No-op change, just splits a longer test into a series of smaller ones
| true
|
2,921,404,910
|
[MPS] Fix type promotion for `torch.floor_divide`
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149234
* __->__ #149233
And delete some duplicating glue code by relying on the stub
After this change `torch.arange(10, device = 'mps') // torch.arange(10., device='mps')` will return a tensor of floats, which is the common dtype for a float + integral operation, rather than a tensor of ints
Checked by `test_div2` inductor testing
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|