| id (int64) | title (string) | user (string) | state (2 classes) | labels (list) | comments (int64) | author_association (4 classes) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,979,018,593
|
Code Clean: Remove Python 3.8-specific code because PyTorch now requires Python 3.9 or later
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150839
* #150838
* __->__ #150834
As the title stated.
| true
|
2,978,918,930
|
Pin all root requirements to major versions
|
jondea
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Builds regularly fail due to major changes in build packages (most recently #150149). Should we pin all of the root [`requirements.txt`](https://github.com/pytorch/pytorch/blob/a106842ea8be6eb17b368de16d9c107c12b809bc/requirements.txt) to at least the major version? (A strawman pin sketch follows the list below.)
I made this a draft because I didn't really know the right solution, but:
- [pyproject.toml](https://github.com/pytorch/pytorch/blob/a106842ea8be6eb17b368de16d9c107c12b809bc/pyproject.toml#L2) has different build requirements from requirements.txt. Which one is canonical, and should we have two?
- The manylinux CI builds seem to `pip install -r requirements.txt`, but the Ubuntu unit-testing CI uses `pip install -r requirements-ci.txt`. @malfet [suggested here](https://github.com/pytorch/pytorch/pull/138338#pullrequestreview-2379132987) that we should pin build requirements in CI but not for local development; should we have another set of requirements just for manylinux?
- Should the requirements be baked into the builder Docker images? At least then we could build with known-good build dependencies by choosing a specific commit of the builder image (e.g. [cpu-aarch64-af5c1b96e251422ad5fb05f98c1f0095f9c9d1cf](https://hub.docker.com/layers/pytorch/manylinuxaarch64-builder/cpu-aarch64-af5c1b96e251422ad5fb05f98c1f0095f9c9d1cf/images/sha256-f41083e96d23c3d2a1e6777f23fcf371979845eab129c25997f552a6d8023ad4)). At the moment the CI build scripts do `pip install`, but this could be done in the `Dockerfile`, which would have the added benefit of speeding up CI and making the builds more reproducible.
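As a strawman, major-version pinning could look something like the snippet below; the package names and bounds are purely illustrative, not a proposal for the actual contents of `requirements.txt`:
```
# illustrative pins only
numpy>=1.26,<2
setuptools>=70,<71
cmake>=3.27,<4
```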
| true
|
2,978,905,041
|
[inductor][cpu]functorch_dp_cifar10 AOTInductor AMP multiple thread performance regression in 2025-03-24 nightly release
|
zxd1997066
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
<p>AOTInductor AMP multiple thread static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>functorch_dp_cifar10</td>
<td>multiple</td>
<td>64</td>
<td>1.016286</td>
<td>0.010592308</td>
<td>0.010764814328088</td>
<td>35.067491</td>
<td>64</td>
<td>1.169075</td>
<td>0.008584628</td>
<td>0.010036073979100002</td>
<td>35.173659</td>
<td>0.87</td>
<td>0.93</td>
<td>0.81</td>
<td>1.0</td>
</tr>
</tbody>
</table>
the bad commit: c36ac16da181989e32458bf52b5bc8ae99a0bb92
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench functorch_dp_cifar10 amp first static default 0 aot_inductor
Testing with aot_inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval functorch_dp_cifar10
skipping cudagraphs due to cpp wrapper enabled
running benchmark: 100%|████████████████████████████████████████████████████████████████████████| 50/50 [00:01<00:00, 25.02it/s]
1.159x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,functorch_dp_cifar10,64,1.159491,17.737401,46.211767,0.718113,74.017178,103.071744,0,0,0,0,0,0,1
```
the last good commit: 109644346737ed094db0b99e9c6dac5ac022e35f
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench functorch_dp_cifar10 amp first static default 0 aot_inductor
Testing with aot_inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval functorch_dp_cifar10
skipping cudagraphs due to cpp wrapper enabled
running benchmark: 100%|████████████████████████████████████████████████████████████████████████| 50/50 [00:01<00:00, 27.63it/s]
1.461x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,functorch_dp_cifar10,64,1.461040,14.502100,50.749922,0.721483,73.839411,102.343885,0,0,0,0,0,0,1
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>373ffb19</td>
<td>main</td>
<td>373ffb19</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>45b11730f10f64171a9861c98782e1875bad87c9</td>
<td>main</td>
<td>f80bee4934dc2d6c8031f481d699cd4832a1a932</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+318bace</td>
<td>main</td>
<td>2.6.0a0+c670ad8</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
<td>main</td>
<td>0.7.0a0+11bb5b8</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench functorch_dp_cifar10 amp first static default 0 aot_inductor
Suspected guilty commit: c36ac16da181989e32458bf52b5bc8ae99a0bb92
[torchbench-functorch_dp_cifar10-inference-amp-static-default-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/19644830/torchbench-functorch_dp_cifar10-inference-amp-static-default-multiple-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129
| true
|
2,978,897,433
|
[Quant][PT2E][X86] Enable annotation of aten.mul.tensor with X86InductorQuantizer
|
Xia-Weiwen
|
closed
|
[
"open source",
"release notes: quantization",
"intel"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150831
* #151112
**Summary**
This PR adds support of annotation of `aten.mul.tensor` in `X86InductorQuantizer`.
`mul` is not annotated by default. Users need to set the following to enable annotation of `mul`:
```python
quantizer.set_function_type_qconfig(
torch.mul, quantizer.get_global_quantization_config()
)
```
After `convert_pt2e`, users get patterns like
```
quantize_per_tensor_default = torch.ops.quantized_decomposed.quantize_per_tensor.default(x, ...)
dequantize_per_tensor_default = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default, ...)
quantize_per_tensor_default_1 = torch.ops.quantized_decomposed.quantize_per_tensor.default(y, ...);
dequantize_per_tensor_default_1 = torch.ops.quantized_decomposed.dequantize_per_tensor.default(quantize_per_tensor_default_1, ...)
mul = torch.ops.aten.mul.Tensor(dequantize_per_tensor_default, dequantize_per_tensor_default_1);
```
**Test plan**
```
pytest test/quantization/pt2e/test_x86inductor_quantizer.py -k test_annotate_mul_tensor
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,978,891,788
|
[Inductor UT][Break XPU] Fix UTs for XPU broken by community.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150830
* #149862
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,978,771,916
|
[Accelerator][Chore] Use existing `acc` when raising an error
|
shink
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"ciflow/rocm"
] | 4
|
CONTRIBUTOR
|
As the title says, `acc` already exists, so we just use it instead of calling `current_accelerator()` again.
cc: @albanD @guangyey @FFFrog
| true
|
2,978,708,667
|
[ez] dynamo fix typo in comment
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151180
* #151179
* __->__ #150828
* #150755
* #150754
* #150753
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,978,706,354
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"keep-going",
"ciflow/xpu",
"release notes: xpu",
"ci-no-td"
] | 29
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [655fa9bc7f88ab5bd3766b5f2fd5b43989c2caca](https://github.com/intel/torch-xpu-ops/commit/655fa9bc7f88ab5bd3766b5f2fd5b43989c2caca), including:
- Update commit pin to xpu-ops main branch
- Fix a batch_norm numeric error by adding an additional boundary check
- Enable two operators: fft & jagged_to_padded_dense
- XCCL relevant changes:
1. Cache `cclStream` to improve performance.
2. Add support for complex datatypes in `allgather` and `broadcast`.
3. Support `coalescing` operations and `batch_isend_irecv`.
4. Introduce additional logging; use `export TORCH_CPP_LOG_LEVEL=INFO`.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,978,669,705
|
[Codemod][AddExplicitStrictExportForTrainingInferenceArg] caffe2/torch/ao
|
gmagogsfm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Differential Revision: D72615631
| true
|
2,978,669,379
|
[pytorch] Remove numpy dependency from Knapsack Evaluator
|
basilwong
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Summary:
The two implementations are functionally equivalent: both calculate the memory budget at the knee point of the Pareto frontier using the same algorithm. The numpy calls map to plain Python as follows (see the sketch after this list):
1. np.linspace -> basic list comprehension
2. runtime and memory values -> lists instead of numpy arrays
3. np.ptp -> max - min
4. np.norm -> diff with min value / range
5. np.sqrt -> ** 0.5
6. np.argmin -> .index(min(_))
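A hedged, self-contained sketch of the numpy-to-pure-Python substitutions listed above (the values and names are illustrative, not the actual Knapsack Evaluator code):
```python
values = [0.3, 0.1, 0.7, 0.4]

# 1. np.linspace(0, 1, n) -> basic list comprehension
n = 5
linspace = [i / (n - 1) for i in range(n)]      # [0.0, 0.25, 0.5, 0.75, 1.0]

# 2. plain Python lists instead of numpy arrays (see `values` above)

# 3. np.ptp(values) -> max - min
ptp = max(values) - min(values)                 # 0.6

# 4. normalization -> (value - min) / range
normalized = [(v - min(values)) / ptp for v in values]

# 5. np.sqrt(x) -> x ** 0.5
norm = sum(v * v for v in normalized) ** 0.5

# 6. np.argmin(values) -> values.index(min(values))
argmin = values.index(min(values))              # 1
```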
Test Plan:
# Unit Testing
```
buck test mode/opt //caffe2/test/functorch:test_ac_knapsack; pingme "tests done"
Buck UI: https://www.internalfb.com/buck2/f4e41eb8-e775-4f04-b4e7-8e567599deb8
Test UI: https://www.internalfb.com/intern/testinfra/testrun/10133099236155875
Network: Up: 24KiB Down: 1.9GiB (reSessionID-7cd11487-f3e7-43ab-982a-805510771c8d)
Executing actions. Remaining 0/259826 98:15:40.5s exec time total
Command: test. Finished 3 local, 5 remote, 103467 cache (99% hit) 98:15:14.8s exec time cached (99%)
Time elapsed: 1:09.9s
Tests finished: Pass 15. Fail 0. Fatal 0. Skip 0. Build failure 0
```
# End to End Testing
### Baseline Run with DP
Let's confirm everything we are running on works.
- Optimization Algo: DP
- Memory Budget: 0.05
- AIX Link: apf_local-basilwong-2025-03-22_20:39:10
- TLParse rank 0: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpDJaWp5/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
- TLParse rank 1: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpDJaWp5/rank_1/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
### Dynamic Memory Budget (Before Change)
- Revision: 2c95489b7f79
- Optimization Algo: Dynamic Memory Budget
- Memory Budget: 0.05
- AIX Link: https://www.internalfb.com/mlhub/pipeline/4088035428184866
- TLParse:
- https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpykEy8U/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
- https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpykEy8U/rank_1/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
### Dynamic Memory Budget (After Change)
- Revision: 14353eef3c9e
- Optimization Algo: Dynamic Memory Budget
- Memory Budget: 0.05
- AIX Link: https://www.internalfb.com/mlhub/pipeline/1613558749306737
- TLParse Links:
- https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpZKNWFw/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
- https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpZKNWFw/rank_1/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
As a sanity check, let's take the AC information for compile id 7_0_0 from rank 0 of each TLParse.
{F1976883124}
* Baseline: P1779400819
* The saved node values show we are storing much more compared to the dynamic memory budget runs:
```
"Knapsack Saved Nodes": [
16,
17,
19,
20,
21,
22,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60
]
```
* Before Change: P1779401775
* The saved nodes are similar to those after the change, but not identical.
```
"Knapsack Saved Nodes": [
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
49,
50
]
```
* After Change: P1779402106
* Here we see that the largest saved nodes are about the same, but there is a small discrepancy for the smallest nodes.
```
"Knapsack Saved Nodes": [
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
50,
51,
57,
58,
59,
60,
61,
62
],
```
The discrepancy can be explained by looking at the estimated memory values, which are the non-deterministic part (below are the top 5 memory values for the considered candidates):
```
0.05774741703905514,
0.007333005338292718,
0.007333005338292718,
0.007333005338292718,
0.007333005338292718,
```
vs
```
0.049254204820440746,
0.006254502199421049,
0.006254502199421049,
0.006254502199421049,
0.006254502199421049,
```
Given that the dynamic memory budget implementations performed similarly in an E2E test and that the memory estimates are non-deterministic, we should be good to land.
Differential Revision: D71692245
| true
|
2,978,563,434
|
[MPSInductor] Naive welford_reduce implementation
|
malfet
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151155
* #151152
* #151151
* __->__ #150824
* #151042
Literal Python-to-Metal translation of
https://github.com/pytorch/pytorch/blob/85549fe6de3b9a980d1dc98dc57379501bd2bb18/torch/_inductor/runtime/triton_helpers.py#L217-L225
Fixed a missing barrier in `welford_combine`.
This is sufficient to make `GPUTests.test_batch_norm_2d_2_mps` pass.
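For reference, a minimal Python sketch of the Welford combine step mentioned above; this is the standard parallel-variance combine formula written out independently, not the Metal code from this PR:
```python
def welford_combine(mean_a, m2_a, w_a, mean_b, m2_b, w_b):
    # Combine two Welford accumulators (mean, sum of squared deviations, weight).
    w = w_a + w_b
    if w == 0:
        return 0.0, 0.0, 0
    delta = mean_b - mean_a
    mean = mean_a + delta * (w_b / w)
    m2 = m2_a + m2_b + delta * delta * (w_a * w_b / w)
    return mean, m2, w

# Combining the accumulators of [1, 2] and [3, 4, 5] matches a single pass over
# [1, 2, 3, 4, 5]: mean = 3.0, m2 = 10.0 (variance = m2 / n = 2.0).
print(welford_combine(1.5, 0.5, 2, 4.0, 2.0, 3))
```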
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,978,550,962
|
[export] Decomp failure when running `aten.item.default`
|
kisenaa
|
open
|
[
"module: onnx",
"oncall: pt2",
"oncall: export"
] | 1
|
NONE
|
### 🐛 Describe the bug
Trying to export a YOLO11 model to ONNX with `dynamo=True`, but got an error:
```
Ultralytics 8.3.103 🚀 Python-3.12.9 torch-2.8.0.dev20250405+cu128 CUDA:0 (NVIDIA GeForce RTX 4080 Laptop GPU, 12282MiB)
YOLO11n summary (fused): 100 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs
PyTorch: starting from 'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 300, 6) (5.4 MB)
ONNX: starting export with onnx 1.17.0 opset 18...
D:\learn\venv\Lib\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
D:\learn\venv\Lib\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
D:\learn\venv\Lib\site-packages\torchvision\_meta_registrations.py:173: FutureWarning: `create_unbacked_symint` is deprecated, please use `new_dynamic_size` instead
num_to_keep = ctx.create_unbacked_symint()
ONNX: export failure ❌ 5.0s: Failed to decompose the FX graph for ONNX compatibility. This is step 2/3 of exporting the model to ONNX. Next steps:
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
- Create an error report with `torch.onnx.export(..., report=True)`, and save the ExportedProgram as a pt2 file. Create an issue in the PyTorch GitHub repository against the *onnx* component. Attach the error report and the pt2 model.
Error report has been saved to 'onnx_export_2025-04-08_11-09-41-177592_decomp.md'.
## Exception summary
<class 'AttributeError'>: 'float' object has no attribute 'node'
While executing %item : [num_users=1] = call_function[target=torch.ops.aten.item.default](args = (%getitem_21,), kwargs = {})
GraphModule: class GraphModule(torch.nn.Module):
```
Traceback:
```
Original traceback:
File "D:\learn\venv\Lib\site-packages\torch\fx\_symbolic_trace.py", line 805, in forward
return _orig_module_call(mod, *args, **kwargs)
File "D:\learn\venv\Lib\site-packages\torch\export\_trace.py", line 1842, in forward
tree_out = mod(*args, **kwargs)
File "D:\learn\venv\Lib\site-packages\torch\fx\_symbolic_trace.py", line 805, in forward
return _orig_module_call(mod, *args, **kwargs)
File "D:\learn\venv\Lib\site-packages\ultralytics\engine\exporter.py", line 1605, in forward
preds = self.model(x)
File "D:\learn\venv\Lib\site-packages\torch\fx\_symbolic_trace.py", line 805, in forward
return _orig_module_call(mod, *args, **kwargs)
File "D:\learn\venv\Lib\site-packages\ultralytics\nn\tasks.py", line 120, in forward
return self.predict(x, *args, **kwargs)
File "D:\learn\venv\Lib\site-packages\torch\fx\_symbolic_trace.py", line 805, in forward
return _orig_module_call(mod, *args, **kwargs)
File "D:\learn\venv\Lib\site-packages\ultralytics\nn\modules\head.py", line 75, in forward
y = self._inference(x)
(Refer to the full stack trace above for more information.)
Traceback (most recent call last):
File "D:\learn\venv\Lib\site-packages\torch\onnx\_internal\exporter\_core.py", line 1335, in export
decomposed_program = _prepare_exported_program_for_export(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\onnx\_internal\exporter\_core.py", line 898, in _prepare_exported_program_for_export
exported_program = _fx_passes.decompose_with_registry(exported_program, registry)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\onnx\_internal\exporter\_fx_passes.py", line 19, in decompose_with_registry
return exported_program.run_decompositions(decomp_table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\export\exported_program.py", line 122, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\export\exported_program.py", line 1382, in run_decompositions
return _decompose_exported_program(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\export\exported_program.py", line 848, in _decompose_exported_program
) = _decompose_and_get_gm_with_new_signature_constants(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\export\exported_program.py", line 467, in _decompose_and_get_gm_with_new_signature_constants
aten_export_artifact = _export_to_aten_ir(
^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\export\_trace.py", line 824, in _export_to_aten_ir
gm, graph_signature = transform(aot_export_module)(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\_functorch\aot_autograd.py", line 1353, in aot_export_module
fx_g, metadata, in_spec, out_spec = _aot_export_function(
^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\_functorch\aot_autograd.py", line 1592, in _aot_export_function
fx_g, meta = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\_functorch\aot_autograd.py", line 574, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\_functorch\aot_autograd.py", line 675, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\_functorch\_aot_autograd\collect_metadata_analysis.py", line 198, in inner
flat_f_outs = f(*flat_f_args)
^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\_functorch\_aot_autograd\utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\_functorch\_aot_autograd\traced_function_transforms.py", line 899, in functional_call
out = PropagateUnbackedSymInts(mod).run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\fx\interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "D:\learn\venv\Lib\site-packages\torch\fx\experimental\symbolic_shapes.py", line 7286, in run_node
rebind_unbacked(detect_fake_mode().shape_env, n, result)
File "D:\learn\venv\Lib\site-packages\torch\fx\experimental\symbolic_shapes.py", line 549, in rebind_unbacked
if u1.node.hint is not None:
^^^^^^^
AttributeError: 'float' object has no attribute 'node'
```
code:
```python
from ultralytics import YOLO
# Load the YOLOv11 model
model = YOLO("./yolo11n.pt", task='detect')
# add dynamo=true on the export function D:\learn\packages\ultralytics\engine\exporter.py
model.export(
format="onnx",
nms=True,
iou=0.5,
dynamic=True,
simplify=True,
save=True,
half=True,
device = 'cuda',
)
```
I’ve attached the output and report log files below.
How can I solve this problem? Is this a PyTorch issue or a YOLO library issue? Thank you.
[onnx_export_2025-04-08_11-09-41-177592_decomp.md](https://github.com/user-attachments/files/19642384/onnx_export_2025-04-08_11-09-41-177592_decomp.md)
[output.txt](https://github.com/user-attachments/files/19642385/output.txt)
### Versions
```
Collecting environment information...
PyTorch version: 2.8.0.dev20250405+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home Single Language (10.0.22631 64-bit)
GCC version: (Rev3, Built by MSYS2 project) 14.2.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: N/A
Python version: 3.12.9 (tags/v3.12.9:fdb8142, Feb 4 2025, 15:27:58) [MSC v.1942 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4080 Laptop GPU
Nvidia driver version: 572.83
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\cudnn_ops64_9.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Core(TM) Ultra 9 185H
Manufacturer: GenuineIntel
Family: 1
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2300
MaxClockSpeed: 2500
L2CacheSize: 18432
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxscript==0.2.3
[pip3] onnxslim==0.1.49
[pip3] torch==2.8.0.dev20250405+cu128
[pip3] torchaudio==2.6.0.dev20250406+cu128
[pip3] torchvision==0.22.0.dev20250406+cu128
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,978,511,972
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_int64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 5
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_int64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40141680405).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_int64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 228, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 91, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs', keys=('aten::_foreach_abs', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1975, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 235, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3156, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.int64], Tensor[size=(19, 19), device="cuda:0", dtype=torch.int64], Tensor[size=(18, 18), device="cuda:0", dtype=torch.int64], Tensor[size=(17, 17), device="cuda:0", dtype=torch.int64], Tensor[size=(16, 16), device="cuda:0", dtype=torch.int64], Tensor[size=(15, 15), device="cuda:0", dtype=torch.int64], Tensor[size=(14, 14), device="cuda:0", dtype=torch.int64], Tensor[size=(13, 13), device="cuda:0", dtype=torch.int64], Tensor[size=(12, 12), device="cuda:0", dtype=torch.int64], Tensor[size=(11, 11), device="cuda:0", dtype=torch.int64], Tensor[size=(10, 10), device="cuda:0", dtype=torch.int64], Tensor[size=(9, 9), device="cuda:0", dtype=torch.int64], Tensor[size=(8, 8), device="cuda:0", dtype=torch.int64], Tensor[size=(7, 7), device="cuda:0", dtype=torch.int64], Tensor[size=(6, 6), device="cuda:0", dtype=torch.int64], Tensor[size=(5, 5), device="cuda:0", dtype=torch.int64], Tensor[size=(4, 4), device="cuda:0", dtype=torch.int64], Tensor[size=(3, 3), device="cuda:0", dtype=torch.int64], Tensor[size=(2, 2), device="cuda:0", dtype=torch.int64], Tensor[size=(1, 1), device="cuda:0", dtype=torch.int64]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_outplace_cuda_int64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,978,479,909
|
[CI] Run test_torchinductor for MPS device
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150821
There are only 118 failures at the moment; mark them all with xfail to avoid new regressions.
Add an `xfail_if_mps_unimplemented` decorator to distinguish tests that call an unimplemented eager op from tests that fail for some other reason.
Added an `aten._scaled_dot_product_attention_math_for_mps` fallback to make test behavior consistent between MacOS-15 (where the fallback is in place) and MacOS-14.
Weird MacOS-14-specific skips:
- test_torchinductor.py::GPUTests::test_cat_extern_kernel_mps
- test_torchinductor.py::GPUTests::test_sort_transpose_mps (likely an eager bug)
- test_torchinductor.py::GPUTests::test_unaligned_input_mps
Numerous MacOS-13 skips, including a few eager hard crashes; for example, running `test_torchinductor.py::GPUTests::test_scatter5_mps` causes
```
/AppleInternal/Library/BuildRoots/c651a45f-806e-11ed-a221-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSNDArray/Kernels/MPSNDArrayScatter.mm:309: failed assertion `Rank of destination array (1) must be greater than or equal to inner-most dimension of indices array (3)'
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,978,471,498
|
[Manylinux 2.28] Correct Linux aarch64 cuda binaries wheel name
|
pytorchbot
|
closed
|
[] | 1
|
COLLABORATOR
|
Related to: https://github.com/pytorch/pytorch/issues/149044#issuecomment-2784044555
For CPU binaries we run auditwheel; however, for CUDA binaries auditwheel produces invalid results, so we need to rename the file.
| true
|
2,978,458,541
|
Optimize `ConvTranspose2d` stride description
|
zeshengzong
|
closed
|
[
"module: nn",
"module: convolution",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: docs"
] | 10
|
CONTRIBUTOR
|
Fixes #150775
## Test Result
### Before

### After

cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,978,352,998
|
[CUDA] Only use vec128 if CUDA version is newer than 12.8
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Addresses feedback requested at https://github.com/pytorch/pytorch/pull/145746.
| true
|
2,978,330,968
|
Expose bicubic mode for torch::nn::functional::grid_sample in LibTorch
|
inventshah
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp"
] | 18
|
CONTRIBUTOR
|
When bicubic interpolation was added to grid_sampler in #44780, `GridSampleFuncOptions` was not updated to allow a user to use bicubic mode in LibTorch, even though the function could handle it. This PR fixes the parity such that LibTorch's `torch::nn::functional::grid_sample` behaves the same as PyTorch's `torch.nn.functional.grid_sample`.
Existing users can call `torch::grid_sampler` directly, but they must know which int to pass for the interpolation mode (2 for bicubic) and padding mode parameters, which is not ideal.
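For reference (written in Python here rather than C++), this is the call whose option surface the LibTorch `GridSampleFuncOptions` is being brought to parity with; bicubic mode has been accepted on the Python side since #44780:
```python
import torch
import torch.nn.functional as F

inp = torch.randn(1, 1, 8, 8)          # (N, C, H_in, W_in)
grid = torch.rand(1, 4, 4, 2) * 2 - 1  # normalized sampling locations in [-1, 1]
out = F.grid_sample(inp, grid, mode="bicubic", padding_mode="zeros", align_corners=False)
print(out.shape)  # torch.Size([1, 1, 4, 4])
```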
| true
|
2,978,310,493
|
Do not depend on numpy during the import
|
basilwong
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary:
Related issue: https://github.com/pytorch/pytorch/issues/149681
We can follow up with a different implementation that does not use numpy (potentially with Torch primitives).
Test Plan:
pending:
contbuild & OSS CI
Differential Revision: D72609835
| true
|
2,978,303,356
|
[C10D] Document object collectives limitations
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150880
* __->__ #150815
Adds louder warning labels to the doc page and docstrings for the object collectives, in hopes of raising awareness of several footgun issues, including the accidental creation of CUDA contexts caused by serializing and sending 'device-local' GPU tensors over the object-* APIs.
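A minimal sketch of the footgun and the safer pattern (this example is an assumption about typical usage, not text from the PR): move device-local tensors to CPU before handing them to an object collective.
```python
import torch
import torch.distributed as dist

def gather_metrics(metrics: dict) -> list:
    # Risky: if `metrics` holds CUDA tensors, unpickling them on other ranks can
    # silently create extra CUDA contexts on this rank's GPU.
    # Safer: detach and move tensors to CPU before the object collective.
    cpu_metrics = {
        k: (v.detach().cpu() if torch.is_tensor(v) else v) for k, v in metrics.items()
    }
    gathered = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, cpu_metrics)
    return gathered
```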
Preview:
<img width="902" alt="image" src="https://github.com/user-attachments/assets/e0c08c70-d8e5-4e15-b3e2-5cd563714f71" />
addresses #150798
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @d4l3k
| true
|
2,978,271,107
|
[graph partition] reorder to reduce #partitions for simple dependencies
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
This PR reduces the number of graph partitions by reordering nodes when the `should_partition` nodes have simple dependencies. Specifically, for `should_partition` nodes:
a. If a node has no dependencies or only depends on graph inputs, move it to the front. The use case is when we move symints to a cuda tensor for PaddedTensorSubclass.
b. If the only user of a node is OutputNode, move it to the end.
#### Example
The following example shows a padded tensor subclass use case where we copy a symint to a cuda tensor (aka mask) in the middle of the function. Reordering still generates one cudagraph by moving the mask creation to the front.
```python
import torch

torch._inductor.config.graph_partition = True

# Two reasons for this:
# 1. We want to reuse the same mask for many masked_fill calls
# 2. Prevent inductor from fusing this op into other ops (e.g. masked_fill)
#    so we can still reorder in scheduler
@torch.library.custom_op("mylib::create_mask", mutates_args=(), tags=(torch._C.Tag.cudagraph_unsafe,))
def create_mask(padded_size: int, original_size: int, device: torch.device) -> torch.Tensor:
    mask = torch.zeros((padded_size,), dtype=torch.bool, device=device)
    mask[original_size:] = True
    return mask

@create_mask.register_fake
def _(padded_size, original_size, device):
    return torch.empty((padded_size,), dtype=torch.bool, device=device)

def f(padded_tensor, original_tensor, weight):
    original_size = original_tensor.size()[0]
    padded_size = padded_tensor.size()[0]

    # element-wise ops, so we don't care about the padding value
    padded_tensor = padded_tensor + 1
    padded_tensor = torch.nn.functional.relu(padded_tensor)

    # dot product requires padding with 0
    dot_res = padded_tensor.dot(weight)
    padded_tensor += dot_res

    # min requires padding with inf, so we create the mask now
    mask = create_mask(padded_size, original_size, padded_tensor.device)
    min_res = torch.min(
        torch.ops.aten.masked_fill(padded_tensor, mask, float("inf"))
    )

    # max requires padding with -inf; we can reuse the previous mask
    max_res = torch.max(
        torch.ops.aten.masked_fill(padded_tensor, mask, -float("inf"))
    )

    return min_res + max_res + padded_tensor

compiled_f = torch.compile(f, mode="reduce-overhead")

def run(padded_size, original_size):
    padded_tensor = torch.randn(padded_size, device="cuda")
    padded_tensor[original_size:] = 0
    original_tensor = torch.randn(original_size, device="meta")
    weight = torch.randn(padded_size, device="cuda")

    eager_out = f(padded_tensor, original_tensor, weight)
    compiled_out = compiled_f(padded_tensor, original_tensor, weight)
    assert torch.allclose(eager_out[0], compiled_out[0])
    assert torch.allclose(eager_out[1], compiled_out[1])

# new cudagraph
run(8, 4)

# new cudagraph due to recompile
run(8, 6)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,978,256,630
|
add reduce_scatter to symm mem ops
|
ngimel
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
COLLABORATOR
|
+ a few small fixes (don't error out on 0-element tensors, a few more checks for contiguous outputs, more threads for better perf).
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @xw285cornell
| true
|
2,978,250,998
|
[CUDA][cuBLAS] Aten GEMM overload for FP32 output from FP16/BF16 inputs
|
PaulZhang12
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150812
Enable FP32 output from FP16/BF16 GEMMs in aten with cuBLAS. Accumulation for these GEMMs are generally already done in FP32. Adds the functionality to the following aten operators:
* mm
* bmm
* addmm
* baddmm
Follow up of customer issue: https://github.com/pytorch/pytorch/issues/146241#issuecomment-2781889390
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D73126191](https://our.internmc.facebook.com/intern/diff/D73126191)
| true
|
2,978,233,244
|
PyTorch: is_impure() does not take any arguments; removed it
|
elpdumont
|
open
|
[
"fb-exported",
"release notes: fx",
"fx"
] | 4
|
NONE
|
Summary:
D72427768 introduced an argument when calling `is_impure` (defined here: https://www.internalfb.com/code/fbsource/[00b3734ebfa7]/arvr/libraries/art/python/third_party/_python3.7/_win64/torch/fx/node.py?lines=509)
This broke our conveyor:
https://fb.workplace.com/groups/CTRLEngSupport/permalink/4045843202402092/
We removed the argument.
Test Plan:
`pte flow configs/pipelines/f4/releases/p1r/f4_pp_20250317_release enable_fast_run=True`
https://internalfb.com/intern/fblearner/details/718510504/
Differential Revision: D72605617
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,978,151,861
|
[dynamo][guards] Print relational guards only once
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #140756
* __->__ #150810
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,978,126,801
|
[export] Integrate meta kernel generation with draft-export
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
If a custom operator does not contain a fake impl, currently draft-export will use the real-tensor propagation to get an output for the operator and continue tracing. However if we retrace the exported model using `ep.run_decompositions`, or `export`, or run the exported program with fake tensors, we'll still fail because there's no fake impl.
With this PR, after draft-export we will generate an operator profile for each operator call that we encounter, and store this on the report attached to the exported program `ep._report.op_profiles`. Users can then use `torch._library.fake_profile.register_fake_profile` to temporarily generate and register a fake impl based on these operator profiles. This way future fake tensor retracing will work.
The workflow would look something like:
```python
class M(torch.nn.Module):
    def forward(self, a, b):
        res = torch.ops.mylib.foo8(a, b)  # no fake impl
        return res

ep = export(M(), (torch.ones(3, 4), torch.ones(3, 4)))  # this fails bc no fake impl
ep = draft_export(M(), (torch.ones(3, 4), torch.ones(3, 4)))

ep.run_decompositions()  # this fails bc no fake impl

# this registers fake impls based on the profiles
with torch._library.fake_profile.register_fake_profile(ep._report.op_profiles):
    decomp = ep.run_decompositions()  # this works
    new_inp = (
        torch.ones(2, 3, 4),
        torch.ones(2, 3, 4),
    )
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150809
| true
|
2,978,126,717
|
Fix assert_tensor_meta
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150809
* __->__ #150808
* #150807
* #150806
| true
|
2,978,126,637
|
Generate meta kernel with operator profiles
|
angelayi
|
closed
|
[
"module: custom-operators",
"Merged",
"release notes: composability"
] | 2
|
CONTRIBUTOR
|
Added a context manager, `torch._library.fake_profile.register_fake_profile(op_profiles)`, where given an operator profile, it will generate and register a fake impl for the operator based on the operator profile.
The input to `register_fake_profile` is a dictionary mapping operator name to a set of profiles which describe the input and outputs of the operator. Here's an example of a profile for `mylib.foo.default`:
```
"mylib.foo.default": {
OpProfile(
args_profile=(
TensorMetadata(rank=2, dtype=torch.float32, device=torch.device("cpu"), layout=torch.strided,),
TensorMetadata(rank=2, dtype=torch.float32, device=torch.device("cpu"), layout=torch.strided,),
),
out_profile=TensorMetadata(rank=2, dtype=torch.float32, device=torch.device("cpu"), layout=torch.strided,),
)
}
```
`foo`'s profile contains only one entry, which says that for 2 input tensors of rank 2, dtype float32, and device cpu, we will return one tensor of rank 2, dtype float32, and device cpu.
This will then generate a fake kernel where, given 2 input tensors of rank 2 (and matching tensor metadata), we output one tensor of rank 2 (and matching tensor metadata). If the operator also supports other input ranks, we can add to the profile so the fake impl supports more input types.
This profile can either be manually written or created by draft-export, and then checked into the codebase.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150809
* #150808
* __->__ #150807
* #150806
| true
|
2,978,126,545
|
[custom ops] Override fake registration
|
angelayi
|
closed
|
[
"module: custom-operators",
"Merged",
"ciflow/trunk",
"release notes: composability"
] | 3
|
CONTRIBUTOR
|
Added a flag, `allow_override`, to allow overriding existing kernel implementations in `torch.library.register_fake` and `torch.library.impl`. The default is false, where if a user tries to register a kernel to a dispatch key that already contains a kernel, it will error. This flag doesn't apply to CustomOpDefs, where overriding a fake kernel is already allowed.
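A hedged sketch of how the flag described above might be used; the exact keyword placement is taken from this summary rather than verified against the merged API, and `mylib::double` is a hypothetical op defined only for illustration:
```python
import torch

# Hypothetical op defined at the low level (not a CustomOpDef, where overriding
# a fake kernel is already allowed).
torch.library.define("mylib::double", "(Tensor x) -> Tensor")

torch.library.register_fake("mylib::double", lambda x: torch.empty_like(x))

# With the default allow_override=False a second registration to the same
# dispatch key errors; allow_override=True (the new flag) replaces the
# existing fake kernel instead.
torch.library.register_fake(
    "mylib::double", lambda x: torch.empty_like(x), allow_override=True
)
```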
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150809
* #150808
* #150807
* __->__ #150806
| true
|
2,978,125,024
|
ONNX cannot save the XGBoost binary classifier properly when trained on an imbalanced dataset.
|
cugurm
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
ONNX cannot properly save an XGBoost binary classification model when it is trained on an imbalanced dataset.
When I create the dataset for the XGBoost binary classification model like this:
```
n_instances, n_features = 100_000, 300
X = np.random.rand(n_instances, n_features)
y = np.random.randint(0, 2, size=(n_instances,)) # Binary labels (0 or 1)
```
I am able to save the trained model to ONNX, load it, and make predictions that match the original model.
However, when I create the training dataset with imbalanced labels like this:
```
n_instances, n_features = 100_000, 300
X = np.random.rand(n_instances, n_features)
class_0_count, class_1_count = 90_000, 10_000
y = np.concatenate([np.zeros(class_0_count), np.ones(class_1_count)])
np.random.shuffle(y)
```
saving the model to ONNX and loading it results in predictions that differ from the original model.
Reproducer:
```
import numpy as np
import onnxruntime as rt
from sklearn.datasets import load_iris
from xgboost import XGBClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from skl2onnx import update_registered_converter
from skl2onnx.common.shape_calculator import calculate_linear_classifier_output_shapes
from onnxmltools.convert.xgboost.operator_converters.XGBoost import convert_xgboost
def convert_xgboost_pipeline_to_onnx(X, y, n_features, n_test_instances=3):
    # Train XGBoost classifier
    pipe = XGBClassifier(objective='binary:logistic', eval_metric='logloss', n_estimators=500, max_depth=5, reg_lambda=1, reg_alpha=0)
    pipe.fit(X, y)

    # Register the ONNX converter for XGBClassifier
    update_registered_converter(
        XGBClassifier,
        "XGBoostXGBClassifier",
        calculate_linear_classifier_output_shapes,
        convert_xgboost,
        options={"nocl": [True, False], "zipmap": [True, False, "columns"]},
    )

    # Convert the model to ONNX
    model_onnx = convert_sklearn(
        pipe,
        "pipeline_xgboost",
        [("input", FloatTensorType([None, n_features]))],
        target_opset={"": 12, "ai.onnx.ml": 2},
        options={"zipmap": False, "nocl": False},
    )

    # Save the ONNX model
    with open("pipeline_xgboost.onnx", "wb") as f:
        f.write(model_onnx.SerializeToString())

    # Compare predictions
    print("XGBoost predict:", pipe.predict(X[:n_test_instances]))
    print("XGBoost predict_proba:", pipe.predict_proba(X[:n_test_instances]))

    # Predictions with ONNX Runtime
    sess = rt.InferenceSession("pipeline_xgboost.onnx", providers=["CPUExecutionProvider"])
    pred_onx = sess.run(None, {"input": X[:n_test_instances].astype(np.float32)})
    print("ONNX predict:", pred_onx[0])
    print("ONNX predict_proba:", pred_onx[1])

if __name__ == "__main__":
    n_instances, n_features = 100_000, 300
    X = np.random.rand(n_instances, n_features)

    y = np.random.randint(0, 2, size=(n_instances,))  # Binary labels (0 or 1)
    convert_xgboost_pipeline_to_onnx(X, y, n_features)

    class_0_count, class_1_count = 90_000, 10_000
    y = np.concatenate([np.zeros(class_0_count), np.ones(class_1_count)])
    np.random.shuffle(y)
    convert_xgboost_pipeline_to_onnx(X, y, n_features)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 2000 Ada Generation Laptop GPU
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 2
CPU max MHz: 5400.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu11==11.7.101
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu11==10.2.10.91
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu11==11.4.0.1
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu11==11.7.4.91
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu11==2.14.3
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] nvidia-nvjitlink-cu12==12.4.99
[pip3] nvidia-nvtx-cu11==11.7.91
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnx==1.14.0
[pip3] onnxconverter-common==1.14.0
[pip3] onnxmltools==1.11.2
[pip3] onnxruntime==1.20.0
[pip3] skl2onnx==1.15.0
[pip3] torch==2.0.1
[pip3] torch-geometric==2.3.1
[pip3] torchmetrics==1.1.1
[pip3] triton==2.0.0
[conda] Could not collect
```
| true
|
2,978,109,353
|
[Inductor] assert fallback output alignment
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150804
* #150777
The previous PR (https://github.com/pytorch/pytorch/pull/150777) fixes the alignment problem for fallback kernels, assuming the meta kernel is correct. This PR handles the case where the meta kernel is incorrect: an assertion is added whenever the compiler assumes a fallback kernel's output is aligned.
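A minimal sketch of the kind of runtime check described above; this is not the code Inductor generates, just an illustration of asserting storage alignment on a tensor (the 16-byte value is an assumption for illustration only):
```python
import torch

ALIGNMENT = 16  # bytes; assumed value for illustration

def assert_output_aligned(t: torch.Tensor, alignment: int = ALIGNMENT) -> None:
    # data_ptr() is the address of the first element of the tensor's storage.
    assert t.data_ptr() % alignment == 0, (
        f"expected {alignment}-byte aligned output, got address {t.data_ptr():#x}"
    )

assert_output_aligned(torch.empty(128))  # freshly allocated tensors are typically aligned
```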
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,978,101,665
|
TEST CACHE
|
muchulee8
|
closed
|
[
"topic: not user facing",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150803
* #150276
Summary:
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
| true
|
2,978,092,419
|
Fix `-Wmissing-braces` in a few files
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: sparse"
] | 8
|
CONTRIBUTOR
|
Test Plan: Sandcastle
Reviewed By: wenxin0319
| true
|
2,978,084,429
|
ProcessGroupGloo: support lazy_init
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ci-no-td"
] | 16
|
MEMBER
|
This adds lazy initialization support to ProcessGroupGloo via `TORCH_GLOO_LAZY_INIT` or via `create_device(..., lazy_init=True)`.
This is still a draft PR, as there's one race condition when doing coalesced operations that needs to be fixed upstream in Gloo first. Depends on https://github.com/facebookincubator/gloo/pull/427 landing first.
This also updates the gloo submodule to include the required changes.
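A minimal sketch of the two enablement routes named above; the `create_device` keyword arguments other than `lazy_init` are assumptions for illustration, not taken from the PR:
```python
import os
import torch.distributed as dist

# Route 1: environment variable, read when the Gloo process group is constructed
# (e.g. by a later dist.init_process_group("gloo") call under torchrun).
os.environ["TORCH_GLOO_LAZY_INIT"] = "1"

# Route 2 (assumed signature, per the PR description): build the Gloo device
# explicitly with lazy init enabled.
# device = dist.ProcessGroupGloo.create_device(hostname="localhost", lazy_init=True)
```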
Test plan:
added lazy init test variants
```
pytest -v test/distributed/test_c10d_gloo.py -k Lazy
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
| true
|
2,978,060,210
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_int32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_int32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40123681853).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_int32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,978,036,797
|
FSDP in hybrid mode throws _saved_grad_shard error when backward is called on cross-rank all-gathered loss
|
TianyiXiong1998
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 3
|
NONE
|
Hi, I’m encountering a gradient error when using FSDP in hybrid sharding mode (i.e., ShardingStrategy.HYBRID_SHARD) during training. Here’s the setup and problem:
Setup:
• I am training with multiple ensemble members, distributed across ranks.
• Each rank holds 1 or more ensemble members.
• After local predictions are made, I use all_gather_into_tensor to gather all ensemble outputs across ranks.
• On rank 0, I compute the loss based on the full gathered ensemble.
• Then, I try to call .backward() on the loss computed from the gathered predictions.
Code:
```python
for data, target in train_loader:
    # Prepare input for multiple ensemble members on this rank
    local_preds = []
    for m in range(num_local_ensemble_members):
        pred = model(input_data, ...)
        local_preds.append(pred.unsqueeze(1))
    local_preds_tensor = torch.cat(local_preds, dim=1)

    # Pad predictions on this rank to max ensemble size per rank
    padded_preds = pad_to_max_ensemble_size(local_preds_tensor)

    # All-gather ensemble predictions across all ranks
    all_preds = torch.empty(gather_shape)
    dist.all_gather_into_tensor(all_preds, padded_preds)
    step_ens = reconstruct_from_gathered(all_preds)

    # Compute loss on rank 0 using full ensemble prediction
    if rank == 0:
        step_loss = loss_fn(step_ens, target)
        # Backward on rank 0
        optimizer.zero_grad()
        step_loss.backward()  # <-- ❗ This raises "_saved_grad_shard" error
        optimizer.step()
```
Problem:
```
[rank0]: Traceback (most recent call last):
[rank0]:   File "conda-envs/conda_env/lib/python3.11/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 1182, in _finalize_params
[rank0]:     handle.prepare_gradient_for_optim()
[rank0]:   File "/conda-envs/conda_env/lib/python3.11/site-packages/torch/distributed/fsdp/_flat_param.py", line 1636, in prepare_gradient_for_optim
[rank0]:     _p_assert(
[rank0]:   File "/conda-envs/conda_env/lib/python3.11/site-packages/torch/distributed/utils.py", line 146, in _p_assert
[rank0]:     raise AssertionError(s)
[rank0]: AssertionError: All sharded parameters that received a gradient in the post-backward should use `_saved_grad_shard`
```
This only happens when:
• I use FSDP with HYBRID_SHARD or SHARD_GRAD_OP.
• Loss is computed from all_gather-ed predictions that were not forward-passed on this rank.
It works fine if:
• I use NO_SHARD strategy.
• Or compute loss and backward based only on the local forward outputs.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360
| true
|
2,978,030,056
|
`all_gather_object` creates context for each gpu multiple times (leaks memory)
|
stas00
|
closed
|
[
"oncall: distributed"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When using `all_gather_object`, many GBs of memory are leaked with 8 GPUs the first time it is used (there is no problem with `all_gather`): it creates a new context for each GPU, so 7 times too many with 8 GPUs (64 contexts instead of 8, which can be observed with `nvidia-smi` showing 64 entries instead of 8).
repro program:
[dist-mem-test2.txt](https://github.com/user-attachments/files/19639305/dist-mem-test2.txt)
repro log:
[dist-mem-test2.log](https://github.com/user-attachments/files/19639299/dist-mem-test2.log)
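A minimal sketch of the reported pattern, for context (the attached files above are the authoritative repro; this assumes a multi-GPU node launched with torchrun):
```python
# Minimal sketch only; the attached repro program is the authoritative version.
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

obj = {"rank": rank}
out = [None] * dist.get_world_size()
dist.all_gather_object(out, obj)  # reported to create extra CUDA contexts (64 instead of 8 with 8 GPUs)
```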
CC: @wconstab
### Versions
pt-2.6
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @chauhang @penguinwu
| true
|
2,977,969,306
|
Add CPython tests for iter/sort
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152015
* __->__ #150797
* #150796
* #150795
* #150794
* #150793
* #150791
* #150790
* #150789
* #150788
Tests:
* test_iter.py
* test_sort.py
| true
|
2,977,969,165
|
Add CPython generator/contextlib tests
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152015
* #150797
* __->__ #150796
* #150795
* #150794
* #150793
* #150791
* #150790
* #150789
* #150788
Tests:
* test_generator.py
* test_generator_stop.py
* test_contextlib.py
| true
|
2,977,969,017
|
Add CPython int/float tests
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152015
* #150797
* #150796
* __->__ #150795
* #150794
* #150793
* #150791
* #150790
* #150789
* #150788
Tests:
* test_int.py
* test_int_literal.py
* test_float.py
| true
|
2,977,968,813
|
Add CPython math/cmath tests
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152015
* #150797
* #150796
* #150795
* __->__ #150794
* #150793
* #150791
* #150790
* #150789
* #150788
Tests:
* test_math.py
* test_cmath.py
| true
|
2,977,968,653
|
Add CPython string tests
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152015
* #150797
* #150796
* #150795
* #150794
* __->__ #150793
* #150791
* #150790
* #150789
* #150788
Files:
* test_grammar.py
* test_string.py
* test_userstring.py
| true
|
2,977,968,495
|
[Set] Add CPython set tests
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* __->__ #150792
* #152900
* #153070
Tests:
* test_set.py
cc @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,977,968,335
|
Add CPython dict tests
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152015
* #150797
* #150796
* #150795
* #150794
* #150793
* __->__ #150791
* #150790
* #150789
* #150788
Tests:
* test_dict.py
* test_ordered_dict.py
* test_userdict.py
| true
|
2,977,968,186
|
Add CPython list/tuple tests
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152015
* #150797
* #150796
* #150795
* #150794
* #150793
* #150791
* __->__ #150790
* #150789
* #150788
Tests:
* test_list.py
* test_tuple.py
* test_userlist.py
| true
|
2,977,968,046
|
Add CPython exception tests
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152015
* #150797
* #150796
* #150795
* #150794
* #150793
* #150791
* #150790
* __->__ #150789
* #150788
----
* test_baseexception.py
* test_exceptions.py
* test_exception_variations.py
* test_raise.py
* test_sys.py
| true
|
2,977,967,910
|
Add CPython tests for unittest
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152015
* #150797
* #150796
* #150795
* #150794
* #150793
* #150791
* #150790
* #150789
* __->__ #150788
Tests:
* test_assertions.py
| true
|
2,977,967,759
|
Add infra to run CPython tests under Dynamo
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks",
"module: dynamo",
"ciflow/inductor",
"ci-no-td",
"skip-url-lint"
] | 25
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150787
cc @albanD @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,977,938,223
|
[Manylinux 2.28] Correct Linux aarch64 cuda binaries wheel name
|
atalman
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Related to: https://github.com/pytorch/pytorch/issues/149044#issuecomment-2784044555
For CPU binaries we run auditwheel; however, for CUDA binaries auditwheel produces invalid results. Hence we need to rename the file.
| true
|
2,977,929,091
|
[docs] remove --recursive flag from readme
|
danielvegamyhre
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: docs",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Fixes #150745
See https://github.com/pytorch/pytorch/issues/150745#issuecomment-2784216663
Cloning with `--recursive` as shown in the docs prevents users from checking out commits from before NCCL was removed as a submodule.
| true
|
2,977,917,656
|
[Kineto] Enable OOM observer
|
mzzchy
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary:
# Context:
To support the investigation of an OOM issue with the Shampoo optimizer, we want to enable the OOM observer so that Memento can export a snapshot when an OOM happens, in order to figure out what has been allocated/freed before it.
Test Plan:
Run this test with next diff.
```
buck run @//mode/opt kineto/libkineto/fb/mtia/integration_tests:mtia_memory_auto_trace_test
```
https://fburl.com/pytorch_memory_visualizer/vsja3a5c
Differential Revision: D71993315
| true
|
2,977,903,919
|
[BE] Fix Amp.metal compilation warning
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Deleting unused `uint tid` fixes
```
[114/1416] Compiling /Users/nshulga/git/pytorch/pytorch/aten/src/ATen/native/mps/kernels/Amp.metal to Amp_30.air
/Users/nshulga/git/pytorch/pytorch/aten/src/ATen/native/mps/kernels/Amp.metal:70:10: warning: unused parameter 'tid' [-Wunused-parameter]
uint tid [[thread_position_in_grid]]) {
^
1 warning generated.
```
| true
|
2,977,879,921
|
[invoke_subgraph] Preserve node meta
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150717
* __->__ #150782
* #150666
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,977,842,646
|
[cutlass backend] Stop using GenerateSM80 for SM90 and SM100
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150781
Not urgent.
We don't use the GenerateSM80 ops I believe.
For SM100, we could skip SM90 as well. But I don't have data for that.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,977,829,899
|
[MPS] Support ArgumentBuffer bindings from C++/Python
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150780
To workaround limitation of 32-arguments per kernel and being able to eventually compile something like
```python
import torch
def foo(*args):
    rc = torch.empty_like(args[0])
    for arg in args:
        rc += arg
    return rc

tensors = torch.rand(100, 32, device='mps').unbind(0)
print(torch.compile(foo)(*tensors))
```
For now, introduce `at::native::metal::get_tensor_gpu_address` and use it from both C++ test and compile_shader to convert list of tensors to list of pointers valid on GPU.
Initially this binding were done via `id< MTLArgumentEncoder>`, but according to [Improving CPU Performance by Using Argument Buffers](https://developer.apple.com/documentation/metal/improving-cpu-performance-by-using-argument-buffers?language=objc#Encode-Resources-into-Argument-Buffers) article, this is not necessary when targeting Tier2-only devices (which is true of all devices on MacOS-13 or newer):
> To directly encode the argument buffer resources on these Tier 2 devices, write the [MTLBuffer](https://developer.apple.com/documentation/metal/mtlbuffer?language=objc).[gpuAddress](https://developer.apple.com/documentation/metal/mtlbuffer/gpuaddress?language=objc) property — and for other resource types (samplers, textures, and acceleration structures), the [gpuResourceID](https://developer.apple.com/documentation/metal/mtlcomputepipelinestate/gpuresourceid?language=objc) property — into the corresponding structure member. To encode offsets, treat these property values as uint64 types and add the offset to them.
Add both C++ and PyThon unittests that validate that this works.
Please note, that using either ArgumentEncoder or directly encoding the data does not guarantee buffer will not be freed until shader execution is complete. On the other hand, this should already be guaranteed by MPSCachingAllocator that would only free the memory after all streams completed its execution.
| true
|
2,977,785,538
|
Decorator `skipIfXpu` disables tests when used on class
|
exclamaforte
|
open
|
[
"high priority",
"module: ci",
"module: tests",
"triaged",
"module: regression",
"module: testing"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
`skipIfXpu` is used on classes, for example in `test_autoheuristic.py`:
```python
@skipIfXpu(msg="AutoHeuristic doesn't currently work on the XPU stack")
class AutoHeuristicTest(TestCase):
```
If you try to run the tests:
```
(pytorch) $ python test_autoheuristic.py
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
```
No tests found:
```
(pytorch) $ python test_autoheuristic.py --discover-tests
<unittest.suite.TestSuite tests=[]>
```
Running a class member function with skipIfXpu seems to work, however:
```
(pytorch) $ python test_aot_inductor.py -k test_fp8
../home/gabeferns/pt-envs/pytorch/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /home/gabeferns/pt-envs/pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
W0407 12:19:41.756000 1780910 torch/_export/__init__.py:67] +============================+
W0407 12:19:41.756000 1780910 torch/_export/__init__.py:68] | !!! WARNING !!! |
W0407 12:19:41.756000 1780910 torch/_export/__init__.py:69] +============================+
W0407 12:19:41.757000 1780910 torch/_export/__init__.py:70] torch._export.aot_compile()/torch._export.aot_load() is being deprecated, please switch to directly calling torch._inductor.aoti_compile_and_package(torch.export.export())/torch._inductor.aoti_load_package() instead.
stats [('calls_captured', 2), ('unique_graphs', 1)]
inductor [('async_compile_cache_miss', 1), ('extern_calls', 1), ('async_compile_cache_hit', 1)]
graph_break []
aten_mm_info [('aten._scaled_mm.default_s0_32_16', 1)]
./home/gabeferns/pt-envs/pytorch/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /home/gabeferns/pt-envs/pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
stats [('calls_captured', 2), ('unique_graphs', 1)]
inductor [('extern_calls', 1)]
graph_break []
aten_mm_info [('aten._scaled_mm.default_s0_32_16', 1)]
.
----------------------------------------------------------------------
Ran 4 tests in 12.083s
OK
```
### Versions
h100 devserver
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @ZainRizvi @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,977,741,109
|
Add config option to force disable CompiledTritonKernel cache
|
jamesjwu
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150778
We're unfortunately still seeing some flakiness internally in specific internal models, so this adds a config option to disable the CompiledTritonKernels cache to help mitigate it.
The issue seems to be isolated to this specific model. StaticCudaLauncher could also help alleviate it, though I haven't had the permissions to test that yet. It's unclear to me whether this will definitively fix the issue for the job, but we can test it, and if it doesn't, we'll have removed another possible cause.
Differential Revision: [D72584099](https://our.internmc.facebook.com/intern/diff/D72584099/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,977,619,534
|
[Inductor] fix alignment assumption for fallback
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150804
* __->__ #150777
Inductor right now only works properly for fallback kernels producing aligned output.
When Inductor creates the layout for a fallback kernel's output, it does not add the tensor offset to the layout [link](https://github.com/pytorch/pytorch/blob/2a1e2b88ed7bf7d7436b741ee0c3a2297d7d7bc2/torch/_inductor/ir.py#L6935-L6941). Thus an unaligned output will be treated as aligned. Adding the offset to the layout directly does not work, since that changes the index expression in the generated kernel and we may double-apply the offset; Triton already accounts for the offset when passing in the data_ptr.
To solve this issue, we track the unaligned buffer names instead.
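For context, a small illustration (not part of the fix) of how a tensor can have a non-zero storage offset and therefore an unaligned data pointer, even though the layout alone would suggest a fresh, aligned allocation:
```python
import torch

base = torch.randn(17)          # fresh allocations are typically 16-byte aligned
view = base[1:]                 # storage_offset = 1 element (4 bytes for float32)
print(base.data_ptr() % 16)     # usually 0 for a fresh allocation
print(view.data_ptr() % 16)     # 4 -> not 16-byte aligned
```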
This potentially can fix the internal issues we are debugging here: https://fb.workplace.com/groups/1075192433118967/permalink/1618308128807392/
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D72600784](https://our.internmc.facebook.com/intern/diff/D72600784)
| true
|
2,977,492,712
|
[Async TP] reshape error for output of fused scaled_mm reduce scatter in certain case
|
danielvegamyhre
|
closed
|
[
"oncall: distributed"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Can't post stack trace since it is internal code, but the error is thrown on this line: https://github.com/pytorch/pytorch/blob/06e9deabb623e004eb6024e703a976c5748d51e6/torch/distributed/_symmetric_memory/__init__.py#L1331
The error states the target tensor size is not compatible with the target shape of the view op.
This is strange because this code works with torchtitan async TP and all unit tests are passing. So the internal code is hitting some edge case that doesn't occur in torchtitan or our tests.
### Versions
Pytorch nightly
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,977,492,288
|
ConvTranspose2d documentation should clarify behavior of stride > 1 (zero insertion)
|
EduardoLawson1
|
closed
|
[
"module: docs",
"module: nn",
"module: convolution",
"triaged",
"actionable"
] | 2
|
NONE
|
### 📚 The doc issue
## 📌 Feature Request: Improve `ConvTranspose2d` Documentation (Stride > 1)
### Summary
Currently, the documentation for `torch.nn.ConvTranspose2d` does not clearly explain the behavior of the layer when `stride > 1`. In particular, it omits the fact that transposed convolutions with `stride > 1` insert zeros between input values (zero-insertion) before applying the convolution kernel. This behavior is fundamental to understanding the spatial upsampling performed by this layer.
### What’s Missing
There is no mention in the current documentation about:
- The insertion of zeros between input elements when `stride > 1`
- How this zero-insertion affects the output shape and kernel application
- That this is standard behavior for transposed convolutions (a.k.a. fractionally-strided convolutions)
This causes confusion for users, especially those new to transposed convolutions, who expect the behavior to be more analogous to `nn.Upsample` or other interpolation methods.
### Suggest a potential alternative/fix
### Suggested Improvement
Please consider adding a short explanation such as:
> “When `stride > 1`, `ConvTranspose2d` effectively inserts zeros between input elements along the spatial dimensions before applying the convolution kernel. This allows the layer to increase spatial resolution and is equivalent to a learned upsampling operation.”
Additionally, a simple visual example or a reference to relevant literature would also help.
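A small sketch (not part of the original report) demonstrating the zero-insertion equivalence for `stride=2`, `padding=0`, which could accompany the suggested wording:
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
deconv = torch.nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=0, bias=False)

# 1) Insert (stride - 1) zeros between input elements along each spatial dim.
s = 2
z = torch.zeros(1, 1, (x.shape[2] - 1) * s + 1, (x.shape[3] - 1) * s + 1)
z[:, :, ::s, ::s] = x

# 2) Ordinary convolution with the spatially flipped kernel (in/out channels swapped)
#    and "full" padding of kernel_size - 1.
w = deconv.weight.flip(-1, -2).transpose(0, 1)
manual = F.conv2d(z, w, padding=deconv.kernel_size[0] - 1)

print(torch.allclose(deconv(x), manual, atol=1e-6))  # True
```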
cc @svekars @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,977,451,209
|
[Async TP] use original output shape determined by reshape node
|
danielvegamyhre
|
closed
|
[
"oncall: distributed"
] | 2
|
CONTRIBUTOR
|
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,977,418,574
|
[cuda] Add new faster gammabeta backward kernel (#148605) (Reapply with launch bounds)
|
ahmadsharif1
|
closed
|
[
"ciflow/trunk",
"release notes: nn"
] | 2
|
CONTRIBUTOR
|
This is another attempt at re-applying because https://github.com/pytorch/pytorch/pull/150625 was reverted due to internal build failure which should now be resolved.
# Changes over the previous PR
This reverts commit 61a1f09 and adds `__launch_bounds__` to the kernel.
Previously I merged 114d404 that did not work on Blackwell because it consumed too many registers. It got reverted in 61a1f09. For more context see: https://github.com/pytorch/pytorch/issues/150266.
This PR reverts the revert (i.e. reapplies the original diff), with one additional line with `__launch_bounds__` added:
```
git diff HEAD^
diff --git a/aten/src/ATen/native/cuda/layer_norm_kernel.cu b/aten/src/ATen/native/cuda/layer_norm_kernel.cu
index 0d63a2f979c..3ce2c24c18e 100644
--- a/aten/src/ATen/native/cuda/layer_norm_kernel.cu
+++ b/aten/src/ATen/native/cuda/layer_norm_kernel.cu
@@ -657,6 +657,7 @@ bool aligned_grid
>
__global__
void
+__launch_bounds__(block_dim_x * block_dim_y)
GammaBetaBackwardCUDAKernelTemplate(
int64_t M,
int64_t N,
```
I managed to get a Blackwell machine and verified that the fix works. The fix was verified using this repro that I got from @drisspg
<details>
<summary> Repro script that fails on Blackwell </summary>
```
import torch
from torch.nn import init
# from transformer_nuggets import init_logging
# from transformer_nuggets.utils.benchmark import profiler
# from pathlib import Path
# init_logging()
class PermuteModule(torch.nn.Module):
    def __init__(self, permutation):
        super(PermuteModule, self).__init__()
        self.permutation = permutation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        assert len(x.shape) == len(self.permutation), f"Dimension mismatch! Unable to permute {len(x.shape)} dim input with a {len(self.permutation)} dim permutation!"
        return x.permute(*self.permutation)

def test(n_layers: int, conv_stride: int):
    _sequence = []
    for _ in range(n_layers):
        # Conv1d inputs are (N x C x L), LayerNorm expects (* x C). Dims must be permuted between modules.
        _sequence += [
            PermuteModule((0,2,1)),
            torch.nn.Conv1d(in_channels=512, out_channels=512, groups=1, kernel_size=9, dilation=1, stride=conv_stride, padding=0, bias=False),
            PermuteModule((0,2,1)),
            torch.nn.LayerNorm(512),
            torch.nn.ReLU()
        ]
    model = torch.nn.Sequential(*_sequence).to(device="cuda")
    data = torch.randn((100,2048,512), device="cuda")
    out = model(data)
    loss = torch.nn.functional.mse_loss(out, torch.rand_like(out))
    loss.backward()

torch.autograd.set_detect_anomaly(True)
print(f"Torch version: {torch.__version__}")
# with profiler(Path("conv")):
#     print(f"layers=1, stride=1")
#     test(n_layers=1, conv_stride=1)
#     print(f"layers=2, stride=1")
#     test(n_layers=2, conv_stride=1)
#     print(f"layers=1, stride=2")
#     test(n_layers=1, conv_stride=2)
#     print(f"layers=2, stride=2")
#     test(n_layers=2, conv_stride=2)
print(f"layers=2, stride=2")
test(n_layers=2, conv_stride=2)
# we will not reach this print statement.
print("DONE.")
```
</details>
I also re-ran my performance benchmark and found no regressions over the previous PR.
# Full description of the old PR
Original PR: https://github.com/pytorch/pytorch/pull/148605
This PR adds a new kernel for producing gamma and beta values for the backward pass in a performant way.
To test the performance against the baseline, I measured the backward pass of layernorm while sweeping over the following variables (a sketch of the sweep is shown after this list):
1. dtype in {half, float}
2. M in `2**k, 2**k - 1, 2**k + 1 for k in range(...)`
3. N in `2**k, 2**k - 1, 2**k + 1 for k in range(...)`
4. Whether we flush the L2 cache before running the backward pass
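A hedged sketch of such a sweep, assuming a CUDA device; the timing helper and sizes are illustrative and are not the script used for the reported numbers:
```python
import torch

def bench_backward_ms(M, N, dtype):
    x = torch.randn(M, N, device="cuda", dtype=dtype, requires_grad=True)
    ln = torch.nn.LayerNorm(N, device="cuda", dtype=dtype)
    grad = torch.randn(M, N, device="cuda", dtype=dtype)
    y = ln(x)
    start, end = (torch.cuda.Event(enable_timing=True) for _ in range(2))
    torch.cuda.synchronize()
    start.record()
    y.backward(grad)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end)

for dtype in (torch.half, torch.float):
    for k in (10, 14):
        for M in (2**k - 1, 2**k, 2**k + 1):
            print(dtype, M, 2048, bench_backward_ms(M, 2048, dtype))
```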
Summary: The new code performs better than the old code, especially for powers of 2. For M >> N case, it performs very well (kernel itself can be 30x faster and the overall backward pass can be 5-10x faster).
In order to visualize results of the kernel when choosing different values of M, N and dtype, I wrote some code to generate a heatmap. The heatmap has N on the x-axis, M on the y-axis and color-coded points where green shows performance improvement and red shows regressions. For example, `m=32 n=2048 1.42x` in the heatmap would indicate the normalized shape had 32 elements. The leading dimensions' product was 2048 elements and the new kernel resulted in the *backward pass* being 1.42x faster than the old *backward pass*.
Important note: This heatmap shows the total backward pass time as seen by the user. The kernel time difference can be sometimes very large while the total backward pass time is not that high. For example, for dtype=torch.half, M=32 N=2048, flush_l2_cache=True case, the heatmap shows a speedup of 1.42x, while ncu tells me the new kernel is 2.5x faster than the old:
M=32 N=2048 dtype=half flush_l2=True Old Kernel NCU summary:
```
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 1.59
SM Frequency Ghz 1.35
Elapsed Cycles cycle 27,526
Memory Throughput % 2.21
DRAM Throughput % 0.54
Duration us 20.42
L1/TEX Cache Throughput % 4.31
L2 Cache Throughput % 2.62
SM Active Cycles cycle 1,475.02
Compute (SM) Throughput % 0.29
----------------------- ----------- ------------
```
M=32 N=2048 dtype=half flush_l2=True New Kernel NCU summary:
```
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 1.59
SM Frequency Ghz 1.34
Elapsed Cycles cycle 10,920
Memory Throughput % 5.64
DRAM Throughput % 1.35
Duration us 8.13
L1/TEX Cache Throughput % 1.92
L2 Cache Throughput % 6.89
SM Active Cycles cycle 3,554.41
Compute (SM) Throughput % 0.67
----------------------- ----------- ------------
```
Let's look at some rows from the heatmap. For dtype=float16 flush_l2_cache=True and when input shapes are powers of 2, we get the following:
<img width="1508" alt="image" src="https://github.com/user-attachments/assets/06179599-b2f0-4a45-8664-247a1067950b" />
There are 3 columns -- the first shows all data points, the second shows speedups only and the 3rd column shows regressions only. We can see that there are dramatic speedups for M >> N cases and the regressions are not that high (less than 1%, which could just be measurement noise). Here is a small guide I made:

For dtype=float32, we get a similar chart:
<img width="1499" alt="image" src="https://github.com/user-attachments/assets/c4d31a76-03b0-426c-9114-e1bfad29b530" />
The new code performs especially well for m >> n cases, and also where m and n are small. The m >> n case is special because we run 2 reduction kernels back to back and parallelize in the "M" dimension (the older kernel only parallelized in the "N" dimension).
The new code can sometimes have regressions for non-powers of 2. That is because the old code was using block sizes of {16, 32} while we have `threads.x = 32`. For example when N=33, the old code would have 3 blocks and we will have 2 blocks. I wrote some code to specialize for this case, but I think it will add complexity and @ngimel mentioned that non-powers of 2 are rare enough.
I am including the regressions here for completeness' sake:
<img width="1500" alt="image" src="https://github.com/user-attachments/assets/31c17cfb-ed9b-4106-b9c8-5c359751f530" />
To see this better:
1. Click the image
2. Right click the expanded image and open in a new tab
3. Go to that tab and left click once to zoom in
If you want to see the full data, here it is:

I also measured binary size and compile time since those are important for developers:
Binary size comparison

```
# Original
-rwxr-xr-x 1 ahmads users 307193112 Mar 6 08:46 ./torch/lib/libtorch_cuda.so
# This PR
-rwxr-xr-x 1 ahmads users 307193112 Mar 6 08:46 ./torch/lib/libtorch_cuda.so
```
The diff in bytes is 302kB which is about a 0.1% increase.
Compile time difference:
```
# Original
real 0m10.931s
user 0m9.676s
sys 0m1.004s
# this PR
real 0m16.720s
user 0m15.514s
sys 0m1.066s
# Command I ran
time /usr/local/cuda/bin/nvcc -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DFLASHATTENTION_DISABLE_ALIBI -DFLASHATTENTION_DISABLE_SOFTCAP -DFLASH_NAMESPACE=pytorch_flash -DFMT_HEADER_ONLY=1 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_BUILD_MAIN_LIB -DTORCH_CUDA_USE_NVTX3 -DUNFUSE_FMA -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_CUDA -DUSE_CUFILE -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_FLASH_ATTENTION -DUSE_MEM_EFF_ATTENTION -DUSE_NCCL -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cuda_EXPORTS -I/home/ahmads/personal/pytorch/build/aten/src -I/home/ahmads/personal/pytorch/aten/src -I/home/ahmads/personal/pytorch/build -I/home/ahmads/personal/pytorch -I/home/ahmads/personal/pytorch/cmake/../third_party/benchmark/include -I/home/ahmads/personal/pytorch/third_party/onnx -I/home/ahmads/personal/pytorch/build/third_party/onnx -I/home/ahmads/personal/pytorch/nlohmann -I/home/ahmads/personal/pytorch/third_party/flash-attention/csrc/flash_attn/src -I/home/ahmads/personal/pytorch/aten/src/THC -I/home/ahmads/personal/pytorch/aten/src/ATen/cuda -I/home/ahmads/personal/pytorch/third_party/fmt/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/tools/util/include -I/home/ahmads/personal/pytorch/build/caffe2/aten/src -I/home/ahmads/personal/pytorch/aten/src/ATen/.. -I/home/ahmads/personal/pytorch/build/nccl/include -I/home/ahmads/personal/pytorch/c10/cuda/../.. -I/home/ahmads/personal/pytorch/c10/.. -I/home/ahmads/personal/pytorch/third_party/tensorpipe -I/home/ahmads/personal/pytorch/build/third_party/tensorpipe -I/home/ahmads/personal/pytorch/third_party/tensorpipe/third_party/libnop/include -I/home/ahmads/personal/pytorch/torch/csrc/api -I/home/ahmads/personal/pytorch/torch/csrc/api/include -isystem /home/ahmads/personal/pytorch/build/third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/ahmads/personal/pytorch/third_party/protobuf/src -isystem /home/ahmads/personal/pytorch/third_party/XNNPACK/include -isystem /home/ahmads/personal/pytorch/third_party/ittapi/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/ahmads/personal/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /home/ahmads/personal/pytorch/third_party/ideep/include -isystem /home/ahmads/personal/pytorch/INTERFACE -isystem /home/ahmads/personal/pytorch/third_party/nlohmann/include -isystem /home/ahmads/personal/pytorch/third_party/NVTX/c/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/cudnn_frontend/include -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -D_GLIBCXX_USE_CXX11_ABI=1 -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_90,code=sm_90 -Xcudafe 
--diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -O3 -DNDEBUG -std=c++17 -Xcompiler=-fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Xcompiler -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-but-set-variable -Wno-maybe-uninitialized -MD -MT caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/layer_norm_kernel.cu.o -MF caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/layer_norm_kernel.cu.o.d -x cu -c /home/ahmads/personal/pytorch/aten/src/ATen/native/cuda/layer_norm_kernel.cu -o caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/layer_norm_kernel.cu.o
```
So the new PR adds about 6 seconds of compile time.
| true
|
2,977,281,846
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_int16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_int16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40089822514).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_int16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,977,207,691
|
[CI] Add XPU compiled check in CICD
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 4
|
COLLABORATOR
|
Address the suggestion from https://github.com/pytorch/pytorch/issues/150001#issuecomment-2753407421
| true
|
2,977,042,906
|
[Profiler][HPU] Enable profiler.key_averages().table() for HPU devices
|
wdziurdz
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes #150769
| true
|
2,977,038,493
|
[Profiler][HPU] Assertion failure when calling profiler.key_averages().table() on HPU devices
|
wdziurdz
|
closed
|
[
"triaged",
"intel",
"module: hpu"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
profiler.key_averages().table() should be supported for HPU devices. Currently, calling it results in an assertion failure. Example call stack below:
```python
Traceback (most recent call last):
File "torch_profiler_chrome_tracer.py", line 57, in <module>
print(profiler.key_averages().table())
File "python3.10/site-packages/torch/profiler/profiler.py", line 315, in key_averages
return self.profiler.key_averages(group_by_input_shape, group_by_stack_n)
File "python3.10/site-packages/torch/autograd/profiler.py", line 513, in key_averages
return self._function_events.key_averages(
File "python3.10/site-packages/torch/autograd/profiler_util.py", line 332, in key_averages
stats[get_key(evt, group_by_input_shapes, group_by_stack_n)].add(evt)
File "lib/python3.10/site-packages/torch/autograd/profiler_util.py", line 699, in add
self.self_device_time_total += other.self_device_time_total
File "python3.10/site-packages/torch/autograd/profiler_util.py", line 615, in self_device_time_total
assert self.device_type in [
AssertionError
```
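A hedged sketch of the failing pattern, assuming an HPU-enabled build where `torch.profiler.ProfilerActivity.HPU` and the `"hpu"` device are available (the original script is not shown in full):
```python
import torch
from torch.profiler import profile, ProfilerActivity

# Assumes an HPU build (e.g. habana_frameworks installed and imported).
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.HPU]) as prof:
    x = torch.randn(8, 8, device="hpu")
    y = x @ x

print(prof.key_averages().table())  # raises the AssertionError shown above
```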
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.14.0
[pip3] torch==2.6.0
[pip3] torch-debug==2.6.0
[pip3] torch_tb_profiler==0.4.0
[pip3] torchvision==0.21.0
[pip3] triton==3.1.0
[conda] Could not collect
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jeromean @bsochack @sujoysaraswati
| true
|
2,976,761,292
|
[elastic][test] fix race condition in test_barrier_timeout_rank_tracing
|
cdzhan
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
# Root cause
The barrier timeout of 0.1s is too short; some threads may not have enough time to reach the barrier.
# How to reproduce
Adding some sleep will be easy to reproduce.
```python
def test_barrier_timeout_rank_tracing(self):
    N = 3
    store = dist.HashStore()

    def run_barrier_for_rank(i: int):
        if i != 0:
            import time; time.sleep(1)  # Let some thread sleep for a while
        try:
            store_util.barrier(
                store,
                N,
                key_prefix="test/store",
                barrier_timeout=0.1,
                rank=i,
                rank_tracing_decoder=lambda x: f"Rank {x} host",
                trace_timeout=0.01,
            )
        except Exception as e:
            return str(e)
        return ""
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,976,646,684
|
[inductor] Clean typing in codegen/common.py and codecache.py
|
rec
|
open
|
[
"open source",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150767
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,976,612,438
|
Refactor: add initialization of math.lcm into torch_c_binding_in_graph_functions
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150766
As the title stated.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,976,611,813
|
torch.compile failed to handle a custom __delattr__ method correctly
|
XinyiYuan
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 1
|
NONE
|
### 🐛 Describe the bug
torch.compile fails to correctly handle classes with a custom `__delattr__` method. Specifically, when a class overrides `__delattr__` to block deletion of certain attributes, the behavior is not preserved under compilation.
MRE:
```python
import torch

class MyObject:
    def __init__(self, val):
        self.val = val

    def __delattr__(self, attr):
        if attr == "val":
            print(f"Cannot delete attribute '{attr}'!")
        else:
            super().__delattr__(attr)

@torch.compile(fullgraph=True, backend="eager")
def test(input_tensor):
    instance_a = MyObject(1)
    instance_b = MyObject(2)
    del instance_a.val
    del instance_b.val
    exists_a = hasattr(instance_a, 'val')
    exists_b = hasattr(instance_b, 'val')
    return input_tensor + 1, exists_a, exists_b

# Expected output: (tensor([2.]), True, True) since 'val' deletion is prevented
# Actual output: (tensor([2.]), False, False)
print(test(torch.ones(1)))
```
Also, if we don't use `@torch.compile`, this error does not appear. This suggests that the custom `__delattr__` is bypassed or not respected during graph tracing or ahead-of-time compilation.
### Error logs
Terminal output:
```
(tensor([2.]), False, False)
```
And `Cannot delete attribute '{attr}'!` is not printed.
### Versions
python 3.10.14
pytorch 2.4.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
| true
|
2,976,581,829
|
Don't run NCCL/gloo distributed test without GPUs
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
If there aren't any GPUs, the WORLD_SIZE would be zero, which does not work.
So skip those backends completely in that case.
Fix after https://github.com/pytorch/pytorch/pull/137161
It might make sense to still run the CPU part of the tests by using something like `world_size = max(3, gpu_count)` or `num_gpus if num_gpus else 3` instead of skipping them all.
| true
|
2,976,573,934
|
[Dynamo][Typing] Enable `@override` for VTs [1/N]
|
shink
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo"
] | 8
|
CONTRIBUTOR
|
As https://github.com/pytorch/pytorch/pull/150289#pullrequestreview-2729254192 said.
Enable `@override` for VTs (a minimal illustration follows the list):
- torch/_dynamo/variables/base.py
- torch/_dynamo/variables/builtin.py
- torch/_dynamo/variables/constant.py
- torch/_dynamo/variables/ctx_manager.py
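A minimal illustration of what enabling `@override` buys; the classes here are simplified stand-ins for the real VT classes, not the actual diff:
```python
from typing_extensions import override

class VariableTracker:
    def call_method(self, name: str) -> None: ...

class ConstantVariable(VariableTracker):
    @override  # the type checker now flags this if the base method is renamed or removed
    def call_method(self, name: str) -> None: ...
```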
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,976,229,665
|
[Inductor] Set the default value of min_chunk_size to 512
|
jiayisunx
|
open
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150762
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,976,136,900
|
[Easy] enable PYFMT for torch/quantization/eager
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150761
All modifications are done through tools; the detailed commands are as follows:
```bash
lintrunner -a --take "PYFMT" --all-files
```
| true
|
2,976,136,572
|
Add more check for torch.ormqr
|
FFFrog
|
closed
|
[
"release notes: linalg_frontend"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150761
* __->__ #150760
As the title stated.
Please refer to https://github.com/pytorch/pytorch/issues/150674 for more info.
| true
|
2,976,113,250
|
Add more check for torch.ormqr
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150759
As the title stated.
Please refer to https://github.com/pytorch/pytorch/issues/150674 for more info.
| true
|
2,976,104,477
|
[Don't Merge] Check Regression
|
shiyang-weng
|
closed
|
[
"open source",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
There are regressions when running CI for https://github.com/pytorch/pytorch/pull/150150, but this patch is not related to the regressions.
This PR is only used to check whether there are regressions on the master branch.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,976,080,707
|
Inductor `Fatal Python error` via reduction of `None` refcount to 0
|
main-horse
|
closed
|
[
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
# TLDR
1. inductor torch.compile()'d training with torch nightly can produce `Fatal Python error: none_dealloc: deallocating None` after an indeterminate number of steps.
2. This is because some aspect of compiled autograd wrongly reduces the refcount of `None` to 0, which [triggers `Py_XDECREF(None)`](https://github.com/python/cpython/issues/115618)
3. The above does not occur when models are not compiled.
See evidence/repro in related [torchtitan issue](https://github.com/pytorch/torchtitan/issues/1066)
#### Note
I have not confirmed the existence of this issue outside of a single DGX H100 node. It is plausible this issue is derived from elsewhere (e.g. bugged python binary distribution), but I cannot tell.
I believe this issue is unlikely to be caught by tests in general, because the refcount of None is really high after typical trainer init. `sys.getrefcount(None)` starts at ~2e5 on torchtitan's first train step.
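A hedged sketch of how the refcount drop can be monitored; the torchtitan repro linked above is the authoritative one, and this toy loop may not reproduce the bug — it only shows the monitoring:
```python
import sys
import torch

@torch.compile
def train_step(x):
    return (x * 2).sum()

x = torch.randn(8, requires_grad=True)
for step in range(5):
    loss = train_step(x)
    loss.backward()
    print(step, sys.getrefcount(None))  # a steady decrease would indicate the leak
```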
### Error logs
See evidence/repro in related [torchtitan issue](https://github.com/pytorch/torchtitan/issues/1066)
### Versions
```bash
$ python3 collect_env.py
Collecting environment information...
PyTorch version: 2.8.0.dev20250406+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-133-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.127.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 256 MiB (64 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] lovely-numpy==0.2.13
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.3.0+git96316ce5
[pip3] torch==2.8.0.dev20250406+cu126
[pip3] torchdata==0.11.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @chauhang @penguinwu
| true
|
2,975,884,106
|
FP8: E4M3fn: The FP8 E4M3fn result is not inf when casting a bfloat16 value larger than max normal value of FP8 E4M3 (448). It gets rounded down to 448.
|
varun10221
|
closed
|
[
"triaged",
"module: float8"
] | 2
|
NONE
|
### 🐛 Describe the bug
import torch
vals = torch.tensor([464],dtype=torch.bfloat16)
a_f8 = vals.to(torch.float8_e4m3fn)
print(a_f8)
b_bf16 = a_f8.to(torch.bfloat16)
print(b_bf16)
print(torch.finfo(torch.float8_e4m3fn).max)
#This happens for all values from 449 ->465 , it updates to inf for values greater than that.
### Versions
PyTorch version: 2.6.0
cc @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
2,975,813,845
|
[ez] move GuardsContext code comment to the right place
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151180
* #151179
* #150828
* __->__ #150755
* #150754
* #150753
| true
|
2,975,813,754
|
[ez]][dynamo] remove useless super().__init__()
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151180
* #151179
* #150828
* #150755
* __->__ #150754
* #150753
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,975,813,639
|
[ez][dynamo] some code movement
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151180
* #151179
* #150828
* #150755
* #150754
* __->__ #150753
`optimize_assert` already does the lookup for `backend` and
`backend_ctx_ctor`. This simply moves the lookups within `optimize`
lower so we don't end up calling these functions twice unnecessarily
in the `optimize_assert` path.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,975,797,813
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_float64 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_float64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40072204429).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,975,654,630
|
[Quant][PT2E][X86] enable qconv1d-relu fusion
|
Xia-Weiwen
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"intel",
"module: inductor",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150831
* __->__ #150751
**Summary**
As the title.
- The `conv1d - relu` pattern (see the sketch below) will be annotated by the `X86InductorQuantizer`.
- The pattern will be fused as `qconv_pointwise` during lowering.
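A minimal sketch of the `conv1d - relu` pattern referred to above (module shapes are illustrative):
```python
import torch

class ConvReLU1d(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv1d(3, 8, kernel_size=3)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        # This conv1d -> relu chain is what gets annotated and, after
        # quantization, lowered to qconv_pointwise.
        return self.relu(self.conv(x))
```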
**Test plan**
```
python test/inductor/test_mkldnn_pattern_matcher.py -k test_qconv1d_relu_cpu
```
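For context, a rough usage sketch of how the new pattern gets exercised (illustrative only; it is not copied from the PR's test, and helper names may differ slightly across versions):
```python
import torch
import torch.nn as nn
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.x86_inductor_quantizer import (
    X86InductorQuantizer,
    get_default_x86_inductor_quantization_config,
)

class ConvReLU1d(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(3, 8, kernel_size=3)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

m = ConvReLU1d().eval()
example_inputs = (torch.randn(1, 3, 16),)
exported = torch.export.export_for_training(m, example_inputs).module()
quantizer = X86InductorQuantizer().set_global(
    get_default_x86_inductor_quantization_config()
)
prepared = prepare_pt2e(exported, quantizer)
prepared(*example_inputs)              # calibration
converted = convert_pt2e(prepared)
compiled = torch.compile(converted)    # conv1d + relu lowered to qconv_pointwise
```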
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,975,545,825
|
Make device check error message more descriptive
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 21
|
CONTRIBUTOR
|
Fixes #122757
## Test Result
```python
import torch
model_output = torch.randn(10, 5).cuda()
labels = torch.randint(0, 5, (10,)).cuda()
weights = torch.randn(5)
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
loss = loss_fn(input=model_output, target=labels)
print(loss)
Traceback (most recent call last):
File "/home/zong/code/pytorch/../loss2.py", line 17, in <module>
loss = loss_fn(input=model_output, target=labels)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zong/code/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zong/code/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zong/code/pytorch/torch/nn/modules/loss.py", line 1297, in forward
return F.cross_entropy(
^^^^^^^^^^^^^^^^
File "/home/zong/code/pytorch/torch/nn/functional.py", line 3494, in cross_entropy
return torch._C._nn.cross_entropy_loss(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but got weight is on cpu, different from other tensors on cuda:0 (when checking argument in method wrapper_CUDA_nll_loss_forward)
```
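For reference, a rough Python-level illustration of the shape of the check producing this message (a sketch only; the real check lives in ATen's C++ argument checking):
```python
import torch

def check_same_device(**tensors):
    # compare every named tensor's device against the first one and report
    # the offending argument by name, mirroring the improved message above
    items = list(tensors.items())
    _, ref = items[0]
    for name, t in items[1:]:
        if t.device != ref.device:
            raise RuntimeError(
                f"Expected all tensors to be on the same device, but got {name} is on "
                f"{t.device}, different from other tensors on {ref.device}"
            )

check_same_device(input=torch.randn(2), weight=torch.randn(2))  # passes on a CPU-only box
```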
| true
|
2,975,472,077
|
Add `torch.triu_indices`, `torch.tril_indices` dtype description
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 12
|
CONTRIBUTOR
|
Fixes #150675
## Test Result

| true
|
2,975,374,040
|
[DCP][OSS] Introduce barrier util in the DistWrapper for rank local checkpointing
|
saumishr
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)"
] | 4
|
CONTRIBUTOR
|
Summary: Introduce a barrier util in the DistWrapper for rank-local checkpointing. This barrier will be used at the end of rank-local checkpointing to ensure all ranks synchronize.
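As a rough illustration (the actual `DistWrapper` method name and signature may differ), the barrier boils down to a collective that every rank must reach after writing its local shard:
```python
import torch.distributed as dist

def barrier(process_group=None):
    # no-op when the process group is not initialized (e.g. single-rank runs)
    if dist.is_available() and dist.is_initialized():
        dist.barrier(group=process_group)

# at the end of rank-local checkpointing (hypothetical call sites):
# save_local_shard(state_dict, checkpoint_path)  # each rank writes its own shard
# barrier()                                      # all ranks sync before returning
```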
Test Plan: UTs
Differential Revision: D72541431
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,975,370,413
|
DISABLED test_parity__foreach_abs_fastpath_outplace_cuda_float32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 4
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_outplace_cuda_float32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/40065343696).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_outplace_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,975,248,069
|
Export QAT model is not performing as expected when compared to the original model and FX Graph QAT
|
Jacobdelgado1002
|
closed
|
[
"needs reproduction",
"oncall: quantization",
"oncall: pt2",
"oncall: export"
] | 5
|
NONE
|
### 🐛 Describe the bug
I'm trying to perform QAT using MobileNetV2 with the goal of converting it into TFLite. However, after training the model, I run a benchmarking script to compare its performance to the original model and see that the performance degrades greatly.
Here are the important code snippets:
```
import torch
from torchvision import models
from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import XNNPACKQuantizer, get_symmetric_quantization_config

# `device`, `dataloader`, and `train_model` are defined elsewhere in my script
model = models.mobilenet_v2(weights='DEFAULT')
example_inputs = (next(iter(dataloader))[0].to(device),)
model = torch.export.export_for_training(model, example_inputs).module()
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config(is_qat=True))
model = prepare_qat_pt2e(model, quantizer)
train_model(model)
```
I only included what I thought was relevant, since I didn't want to add confusion with all of my helper functions.
```
def train_model(model):
for phase in ['train', 'val']:
is_train = phase == 'train'
if is_train:
torch.ao.quantization.move_exported_model_to_train(model)
else:
# Switch to evaluation mode to perform inference
torch.ao.quantization.move_exported_model_to_eval(model)
data_loader = train_loader if is_train else val_loader
running_loss = 0.0
total_samples = 0.0
predictions, ground_truths, probabilities = [], [], []
with tqdm(total=len(data_loader), desc=f"{phase.capitalize()} Epoch {epoch + 1}/{epochs}") as pbar:
for inputs, labels in data_loader:
inputs, labels = inputs.to(device), labels.to(device)
# Zero gradients only during training
if is_train:
optimizer.zero_grad()
# Enable gradients only in training phase
with torch.set_grad_enabled(is_train):
model = model.to(device)
model_logits = model(inputs)
soft_loss = compute_distillation_loss(model_logits)
label_loss, probs, preds = compute_loss_and_predictions(model_logits, labels, criterion)
# Compute weighted combination of the distillation and cross entropy losses
loss = soft_target_loss_weight * soft_loss + ce_loss_weight * label_loss
# Backward pass and optimizer step in training phase
if is_train:
loss.backward()
optimizer.step()
# Update progress bar with average loss so far
pbar.set_postfix(loss=f"{running_loss / total_samples:.4f}")
pbar.update(1)
```
### Actual vs expected behavior:
I would expect the quantized model to perform better than the original model, but it does not.
| | Original | QAT |
|--------|--------|--------|
| Model Size (MB) | 9.1899 | 11.1504 |
| Inference Time (sec/sample) | 0.002896 | 0.011141 |
| Throughput (samples/sec) | 345.29 | 89.76 |
| Energy per Sample (Joules) | 0.3436 | 1.350853 |
| Throughput per Watt (samples/sec/W) | 2.91 | 0.74 |
This is even stranger because, if I switch to FX Graph QAT, I get the expected behavior. However, I need to use export quantization since I want to use the ai-edge-torch API to convert my model to TFLite.
| | Original | QAT |
|--------|--------|--------|
| Model Size (MB) | 9.1899 | 2.3465 |
| Inference Time (sec/sample) | 0.002896 | 0.000250 |
| Throughput (samples/sec) | 345.29 | 4003.28 |
| Energy per Sample (Joules) | 0.3436 | 0.0271 |
| Throughput per Watt (samples/sec/W) | 2.91 | 36.85 |
Additionally, when I print the resulting QAT model I get the following:
```
GraphModule(
(features): Module(
(0): Module(
(1): Module()
)
(1): Module(
(conv): Module(
(0): Module(
(1): Module()
)
(2): Module()
)
)
(2): Module(
(conv): Module(
(0): Module(
(1): Module()
)
(1): Module(
(1): Module()
)
(3): Module()
)
)
(3): Module(
...
```
I would think that it would be more similar to the resulting QAT model from FX Graph quantization, which leads me to believe that it is not training correctly. The FX Graph model is shown below:
```
GraphModule(
(features): Module(
(0): Module(
(0): QuantizedConv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), scale=0.22475136816501617, zero_point=113, padding=(1, 1))
(2): ReLU6(inplace=True)
)
(1): Module(
(conv): Module(
(0): Module(
(0): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=0.36381739377975464, zero_point=112, padding=(1, 1), groups=32)
(2): ReLU6(inplace=True)
)
(1): QuantizedConv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), scale=0.5194709300994873, zero_point=139)
)
)
...
```
### Versions
My system has an `AMD Ryzen™ Threadripper™ 7960X × 48` and an NVIDIA `GeForce RTX 4090`.
Here is my virtual env:
<pre>absl-py==2.2.1
ai-edge-litert==1.2.0
ai-edge-quantizer==0.1.0
ai-edge-torch==0.4.0
anyio==4.8.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==2.4.1
astunparse==1.6.3
async-lru==2.0.4
attrs==25.3.0
babel==2.17.0
beautifulsoup4==4.13.3
bleach==6.2.0
certifi==2024.12.14
cffi==1.17.1
charset-normalizer==3.4.1
coloredlogs==15.0.1
comm==0.2.2
contourpy==1.3.1
cycler==0.12.1
debugpy==1.8.6
decorator==5.1.1
defusedxml==0.7.1
execnet==2.1.1
executing==2.1.0
executorch==0.5.0
expecttest==0.3.0
fastjsonschema==2.21.1
filelock==3.17.0
flatbuffers==25.2.10
fonttools==4.55.8
fqdn==1.5.1
fsspec==2024.12.0
gast==0.6.0
google-pasta==0.2.0
grpcio==1.71.0
h11==0.14.0
h5py==3.13.0
httpcore==1.0.7
httpx==0.28.1
humanfriendly==10.0
hypothesis==6.130.8
idna==3.10
immutabledict==4.2.1
iniconfig==2.1.0
ipykernel==6.29.5
ipython==8.28.0
ipywidgets==8.1.5
isoduration==20.11.0
jax==0.5.3
jaxlib==0.5.3
jedi==0.19.1
Jinja2==3.1.5
joblib==1.4.2
json5==0.10.0
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
jupyter==1.1.1
jupyter-console==6.6.3
jupyter-events==0.12.0
jupyter-lsp==2.2.5
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyter_server==2.15.0
jupyter_server_terminals==0.5.3
jupyterlab==4.3.5
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.3
jupyterlab_widgets==3.0.13
kaggle==1.6.17
keras==3.9.1
kiwisolver==1.4.8
libclang==18.1.1
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.10.0
matplotlib-inline==0.1.7
mdurl==0.1.2
mistune==3.1.2
ml_dtypes==0.5.1
mpmath==1.3.0
namex==0.0.8
nbclient==0.10.2
nbconvert==7.16.6
nbformat==5.10.4
nest-asyncio==1.6.0
networkx==3.4.2
notebook==7.3.2
notebook_shim==0.2.4
numpy==2.0.0
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-cusparselt-cu12==0.6.2
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
onnx==1.16.1
onnx-graphsurgeon==0.5.7
onnx-tf==1.6.0
onnx2tf==1.27.1
onnxruntime==1.21.0
onnxscript==0.2.3
opt_einsum==3.4.0
optree==0.14.1
overrides==7.7.0
packaging==24.2
pandas==2.2.2
pandocfilters==1.5.1
parameterized==0.9.0
parso==0.8.4
pexpect==4.9.0
pillow==11.1.0
platformdirs==4.3.6
pluggy==1.5.0
prometheus_client==0.21.1
prompt_toolkit==3.0.48
protobuf==3.20.3
psutil==6.0.0
ptyprocess==0.7.0
pure_eval==0.2.3
pycparser==2.22
Pygments==2.19.1
pyparsing==3.2.1
pyRAPL==0.2.3.1
pytest==8.3.5
pytest-xdist==3.6.1
python-dateutil==2.9.0.post0
python-json-logger==3.3.0
python-slugify==8.0.4
pytz==2024.2
PyYAML==6.0.2
pyzmq==26.2.0
referencing==0.36.2
requests==2.32.3
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.9.4
rpds-py==0.23.1
ruamel.yaml==0.18.10
ruamel.yaml.clib==0.2.12
safetensors==0.5.3
scikit-learn==1.6.1
scipy==1.15.1
seaborn==0.13.2
Send2Trash==1.8.3
setuptools==75.8.0
six==1.17.0
sng4onnx==1.0.4
sniffio==1.3.1
sortedcontainers==2.4.0
soupsieve==2.6
stack-data==0.6.3
sympy==1.13.1
tabulate==0.9.0
tensorboard==2.19.0
tensorboard-data-server==0.7.2
tensorflow==2.19.0
termcolor==2.5.0
terminado==0.18.1
text-unidecode==1.3
tf2onnx==1.16.1
tf_keras==2.19.0
tflite==2.18.0
threadpoolctl==3.5.0
tinycss2==1.4.0
torch==2.6.0
torch_xla2==0.0.1.dev202412041639
torchaudio==2.6.0
torchsummary==1.5.1
torchvision==0.21.0
tornado==6.4.1
tqdm==4.67.1
traitlets==5.14.3
triton==3.2.0
types-python-dateutil==2.9.0.20241206
typing_extensions==4.12.2
tzdata==2025.1
uri-template==1.3.0
urllib3==2.3.0
wcwidth==0.2.13
webcolors==24.11.1
webencodings==0.5.1
websocket-client==1.8.0
Werkzeug==3.1.3
wheel==0.45.1
widgetsnbextension==4.0.13
wrapt==1.17.2
</pre>
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,975,199,483
|
Cannot checkout commits from when NCCL was still a submodule
|
danielvegamyhre
|
closed
|
[
"module: build",
"module: ci",
"triaged",
"module: nccl"
] | 5
|
CONTRIBUTOR
|
Is there a way I can check out the commit from before NCCL was updated here: https://github.com/pytorch/pytorch/commit/4ece056791d779a6bfb0574c3a26cd6a7e600089?
When I try, I get an error:
```
fatal: not a git repository: ../../../.git/modules/third_party/nccl/nccl
fatal: could not reset submodule index
```
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,975,145,499
|
[codemod] Fix `-Wambiguous-reversed-operator` in aten/src/ATen/cuda/tunable/Tunable.h
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary:
`-Wambiguous-reversed-operator` warns about ambiguous reversed operators, e.g. `a < b` and `b > a` are both valid. Such operators are disallowed in C++20. This codemod fixes the warnings.
#buildsonlynotests - If this diff compiles, it works.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
Differential Revision: D72535527
| true
|
2,975,094,347
|
a
|
jlcmoore
|
closed
|
[] | 0
|
NONE
| null | true
|
2,975,004,395
|
Install pytorch from pypi using local CUDA build
|
ikrommyd
|
open
|
[
"module: binaries",
"oncall: releng",
"module: ci",
"triaged",
"enhancement",
"has workaround",
"needs design"
] | 5
|
NONE
|
### 🚀 The feature, motivation and pitch
It's great that nvidia provides wheels for the CUDA-related packages and we don't need `conda/mamba` to install pytorch anymore, but those packages take up space if you install pytorch in multiple environments.
It would be nice if you could install a pytorch version from PyPI that could grab and use your local CUDA build.
For example, `cupy` provides `pip install cupy-cuda12x`. `jax` provides `pip install "jax[cuda12_local]"` and as far as I'm aware, `pip install tensorflow` also appears to use the GPU even if I don't specify `pip install "tensorflow[and-cuda]"` which could install the nvidia/cuda wheels as well.
Please close if this is just not possible in pytorch's case or a duplicate (I didn't see it if it's there).
### Alternatives
Just have the available space and install the nvidia wheels on every environment separately.
### Additional context
_No response_
cc @seemethere @malfet @osalpekar @atalman @pytorch/pytorch-dev-infra
| true
|
2,974,992,843
|
how to install pytorch with cuda 12.2 and py3.12
|
goactiongo
|
closed
|
[] | 4
|
NONE
|
### 🐛 Describe the bug
I want to know how to install pytorch with CUDA 12.2.
### Versions
I used the following command, and many issues occurred:
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
| true
|
2,974,540,925
|
[DTensor] Add DTensor redistribute fwd/bwd datatype conversion to enable SimpleFSDP mixed precision training
|
ruisizhang123
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 7
|
CONTRIBUTOR
|
As titled, this PR adds `forward_dtype` and `backward_dtype` conversions to the DTensor `redistribute` API to enable SimpleFSDP's mixed precision training.
In the forward pass, the DTensor can be configured to be cast to `forward_dtype`; in the backward pass, it can be configured to be cast to `backward_dtype`.
1. **Correctness**: The end-to-end SimpleFSDP mixed precision training integration has been shown to work properly in the PR from this fork: https://github.com/tianyu-l/pytorch_intern24/pull/20. We are now migrating the code to official PyTorch DTensor.
2. **Example Usage**: There is an example in TorchTitan's SimpleFSDP implementation: https://github.com/pytorch/torchtitan/pull/1060.
In the example below, a DTensor `x` is all-gathered along `self.compute_placements`, with its datatype cast to `self.param_dtype`. In the backward pass, the computed gradients are additionally reduce-scattered along `self.grad_placements`, with their datatype cast to `self.reduce_dtype`.
```python
output = x.redistribute(
placements=self.compute_placements,
forward_dtype=self.param_dtype,
backward_dtype=self.reduce_dtype,
).to_local(grad_placements=self.grad_placements)
```
Under the hood, in `class Redistribute(torch.autograd.Function):`, the `forward` function takes `x`'s local tensor and converts it to `forward_dtype` before all-gathering `x`.
The `backward` function takes `grad_output` and converts it to `backward_dtype` before reduce-scattering `grad_output`.
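Conceptually, the cast placement looks like the following minimal sketch (placeholders only, not the actual `Redistribute` implementation; the collectives are elided as comments):
```python
import torch

class CastingRedistribute(torch.autograd.Function):
    @staticmethod
    def forward(ctx, local_tensor, forward_dtype, backward_dtype):
        ctx.backward_dtype = backward_dtype
        ctx.input_dtype = local_tensor.dtype
        out = local_tensor.to(forward_dtype)
        # ... all-gather `out` according to the target placements ...
        return out

    @staticmethod
    def backward(ctx, grad_output):
        grad = grad_output.to(ctx.backward_dtype)
        # ... reduce-scatter `grad` according to the grad placements ...
        return grad.to(ctx.input_dtype), None, None

x = torch.randn(4, requires_grad=True)
y = CastingRedistribute.apply(x, torch.bfloat16, torch.float32)
y.sum().backward()
```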
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tianyu-l
| true
|
2,974,536,794
|
[AOTI] Embed cubin files into .so
|
desertfire
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #150739
Summary: Embed cubin files so AOTI is one step closer to generating a single binary. Controlled by a flag and off by default.
Differential Revision: [D72535357](https://our.internmc.facebook.com/intern/diff/D72535357)
| true
|
2,974,521,493
|
[CI] [Inductor] Add MPS to HAS_GPU variable
|
malfet
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150821
* __->__ #150738
* #150824
But exclude it from torch/testing/_internal/triton_utils.py (i.e. the latter implies `HAS_GPU` and has triton).
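Roughly, the gating ends up looking like this illustrative sketch (not the exact helper definitions in torch/testing/_internal):
```python
import torch

HAS_CUDA = torch.cuda.is_available()
HAS_MPS = torch.backends.mps.is_available()
HAS_GPU = HAS_CUDA or HAS_MPS          # MPS now counts as a GPU for these tests
GPU_TYPE = "cuda" if HAS_CUDA else ("mps" if HAS_MPS else None)

# triton_utils.py keeps the stricter requirement, since triton has no MPS backend
HAS_GPU_AND_TRITON = HAS_CUDA
```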
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,974,520,904
|
[MPSInductor] Fix tiled reduction logic
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150738
* __->__ #150737
In the tiled case, the index must include both reduction dimensions.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,974,440,544
|
Fix missing braces for clang CUDA
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: sparse"
] | 4
|
CONTRIBUTOR
|
Test Plan: Sandcastle
Differential Revision: D72469764
| true
|
2,974,439,792
|
Suppress `-Wunused-function` for DSA
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Test Plan: Sandcastle
Reviewed By: dtolnay
Differential Revision: D72458590
| true
|