| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (4 classes) | body (string, 7–62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
3,021,264,655
|
Fix broken URLs
|
shoumikhin
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"module: cpu",
"module: mkldnn",
"Merged",
"NNC",
"ciflow/trunk",
"release notes: quantization",
"release notes: releng",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 3
|
CONTRIBUTOR
|
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mingfeima @XiaobingSuper @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,021,262,427
|
At least one of ROCM_HOME or CUDA_HOME must be None
|
jithunnair-amd
|
open
|
[
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
Copied description by @hj-wei from
https://github.com/ROCm/pytorch/pull/1809
> Hi all, I manually generated nvcc to bypass NVIDIA component
checks (Megatron-LM),
see
https://github.com/NVIDIA/Megatron-LM/blob/2da43ef4c1b9e76f03b7567360cf7390e877f1b6/megatron/legacy/fused_kernels/__init__.py#L57
> but it can lead to incorrect CUDA_HOME configurations. This can cause
initialization anomalies in downstream libraries like DeepSpeed
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,021,246,238
|
[CUDA][SDPA] bump fudge factor in `test_sdpa` in `test_nestedtensor`
|
eqy
|
closed
|
[
"module: cuda",
"open source",
"module: nestedtensor",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: sdpa"
] | 6
|
COLLABORATOR
|
Small numerical mismatches on, e.g., 4090 and A6000/A40.
cc @ptrblck @msaroufim @jerryzh168 @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
3,021,244,180
|
_get_total_norm should use float64 to avoid rounding errors
|
RishabhSaini
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 7
|
NONE
|
When a `_NormPartial` is reduced, rounding errors can cause the resulting tensor to have inconsistent results.
Example:
```
>>> import torch
>>> print(f"{(torch.linalg.vector_norm(torch.tensor([1.0, 1.0]))**2.0 + torch.linalg.vector_norm(torch.tensor([2.0, 2.0]))**2.0)**(1/2):.10f}")
3.1622774601
>>> print(f"{torch.linalg.vector_norm(torch.tensor([[1.0, 1.0], [2.0, 2.0]])):.10f}")
3.1622776985
```
Since `_clip_grads_with_norm_` computes `clip_coef = max_norm / (total_norm + 1e-6)`, these rounding errors are amplified in the results.
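The same effect can be reproduced without torch by emulating float32 rounding with `struct` (a standalone sketch for illustration; the `f32` helper and the sample values are mine, not from the PR):

```python
import math
import struct

def f32(x: float) -> float:
    """Round a Python float (float64) to the nearest float32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Partial norms of [1.0, 1.0] and [2.0, 2.0], each rounded to float32 the way
# a float32 vector_norm would produce them.
n1 = f32(math.sqrt(2.0))
n2 = f32(math.sqrt(8.0))

# Combining the partials entirely in float32: every intermediate is rounded.
combined_f32 = f32(math.sqrt(f32(f32(n1 * n1) + f32(n2 * n2))))

# Carrying the squared partials in float64 (the direction proposed here)
# defers rounding to the end, so the only error left comes from the float32
# rounding of the partial norms themselves.
combined_f64 = math.sqrt(n1 * n1 + n2 * n2)

exact = math.sqrt(10.0)  # full-tensor norm of [1, 1, 2, 2]
```

The float64 combination stays within one float32 rounding of the full-tensor norm, while the all-float32 path accumulates a rounding error at every intermediate step.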
Closes: #149768
See alternative implementation details here (awaiting approval):
https://github.com/pytorch/pytorch/issues/149768#issuecomment-2831707319
Edit:
Working on modifying the existing VectorNorm Strategy to avoid rounding errors
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
3,021,232,169
|
ReducedPrecisionFloatGemvFastPathKernel: Correctly type parallel_for lambda arguments as int64_t
|
swolchok
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152233
* #152232
This plus the previous irangeification PR seem like a better fix for #150637 than #150949 to me -- should make sure we are using 64-bit math for indexing everywhere.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,021,232,122
|
irangeify ReducedPrecisionFloatGemvKernel.cpp
|
swolchok
|
closed
|
[
"module: cpu",
"Merged",
"release notes: linalg_frontend"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152233
* __->__ #152232
We should be using irange, especially because we had 32-bit overflow issues in this file recently.
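The overflow class being guarded against can be shown in pure Python by emulating signed 32-bit index arithmetic (illustrative only; the actual fix uses `c10::irange` with `int64_t` in C++):

```python
def to_int32(x: int) -> int:
    """Wrap an arbitrary integer to signed 32-bit, as a C int would."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

# A flattened index for a large GEMV: row * cols + col easily exceeds 2**31 - 1.
rows, cols = 70_000, 40_000
idx64 = (rows - 1) * cols + (cols - 1)  # correct 64-bit index
idx32 = to_int32(idx64)                 # what a 32-bit loop counter would see
```

With these (hypothetical) sizes the 64-bit index is about 2.8 billion, which wraps to a negative value in 32-bit arithmetic.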
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,021,212,542
|
Fix: Consider input defined unbacked during inductor codegen for runtime asserts
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152231
When we use `mark_unbacked`, the graph will have unbacked input symInts. Right now, deferred runtime assertions that use those are never generated.
This PR changes that: in the forward graph we now consider those inputs and generate the corresponding runtime assertions for them. We still ignore them for backward, which is not ideal.
We generate a runtime assertion by emitting it once all the unbacked symbols it uses have been seen.
We previously skipped placeholders because, for backward, we take a hacky approach where we ignore input-defined unbacked symbols, assume the assertions that use them were already emitted in forward, and try to emit all other runtime assertions again; see [Note [Backwards runtime asserts]].
Doing that, we end up emitting only the runtime assertions that depend on things defined solely in backward, but we can miss checks that span inputs defined in both backward and forward (i.e., one symbol defined in forward and passed as an input to backward, and another defined in backward). This is not ideal; a better approach could be something like https://github.com/pytorch/pytorch/pull/151919, but it requires more work.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,021,188,876
|
[MPS/inductor] Adjust test_to_dtype_mps so that it works on the backend.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
MEMBER
|
float64 isn't supported on MPS, but we can still test the functionality with another dtype.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,021,177,394
|
[BE] Migrate dtype_abbrs into one location
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"release notes: python_frontend",
"topic: bug fixes",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Namely `torch.utils._dtype_abbrs.dtype_abbrs`
Before that it was defined in various forms of completeness in
https://github.com/pytorch/pytorch/blob/c02edba86388d1f86a78bce99d16c5405b54086e/torch/fx/graph.py#L215,
https://github.com/pytorch/pytorch/blob/c02edba86388d1f86a78bce99d16c5405b54086e/torch/testing/_internal/common_utils.py#L5226
and https://github.com/pytorch/pytorch/blob/c02edba86388d1f86a78bce99d16c5405b54086e/torch/testing/_internal/logging_tensor.py#L17
TODO:
- Add linter that `torch.testing._internal` module is not referenced from any of the public facing APIs, as it can have extra dependencies such as `expect_test`
Fixes https://github.com/pytorch/pytorch/issues/152225
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,021,166,911
|
Add `padding="same"` for transposed convolution
|
Alvaro-Kothe
|
open
|
[
"module: cpu",
"triaged",
"open source",
"release notes: nn"
] | 6
|
CONTRIBUTOR
|
This pull request makes `ConvTranspose*d` and `conv_transpose*d` compatible with the argument `padding="same"`.
I tried to follow the current implementation of the `Conv*d` layer.
Closes #80301, Closes #3867
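The shape arithmetic behind `padding="same"` for a transposed convolution can be sketched as follows (hand-rolled helpers for illustration only; the function names and the stride-1 restriction are my assumptions, not the PR's actual API):

```python
def conv_transpose_out_len(l_in: int, kernel: int, stride: int = 1,
                           padding: int = 0, dilation: int = 1,
                           output_padding: int = 0) -> int:
    """Output length of a 1-D transposed convolution (PyTorch's documented formula)."""
    return (l_in - 1) * stride - 2 * padding + dilation * (kernel - 1) + output_padding + 1

def same_padding(kernel: int, dilation: int = 1) -> int:
    """Padding that keeps output length == input length at stride 1.

    Solving l_in == (l_in - 1) - 2*p + dilation*(kernel - 1) + 1
    gives 2*p = dilation*(kernel - 1), so only odd effective kernels work.
    """
    total = dilation * (kernel - 1)
    if total % 2:
        raise ValueError("'same' needs an odd effective kernel size at stride 1")
    return total // 2
```

For example, a kernel of 3 with dilation 1 needs padding 1, and `conv_transpose_out_len(10, 3, padding=1)` then returns 10.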
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,021,143,862
|
[inductor][tests] don't test for cpu if you want to use triton backend
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152227
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,021,141,629
|
NotImplementedError: Could not run 'aten::index.Tensor' with arguments from the 'SparseCUDA' backend.
|
ringohoffman
|
open
|
[
"module: sparse",
"triaged"
] | 3
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
I want to make vectorized selections on a sparse tensor, but it isn't implemented for the `SparseCUDA` backend.
```python
import torch
device = torch.device("cuda:0")
indices = torch.tensor(
[
[0, 1, 2, 3],
[1, 2, 3, 4],
[2, 3, 4, 5]
],
device=device,
)
values = torch.tensor(
[10.0, 20.0, 30.0, 40.0],
device=device,
)
size = (4, 5, 6)
sparse_tensor = torch.sparse_coo_tensor(indices, values, size)
indices = torch.tensor([
[0, 1],
[1, 2],
[2, 3],
], device=device)
dense_tensor = sparse_tensor.to_dense()
result = dense_tensor[tuple(indices)]
# tensor([10., 20.], device='cuda:0')
sparse_result = sparse_tensor[indices]
# NotImplementedError: Could not run 'aten::index.Tensor' with arguments from the 'SparseCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::index.Tensor' is only available for these backends: [CPU, CUDA, HIP, MPS, IPU, XPU, HPU, VE, MTIA, PrivateUse1, PrivateUse2, PrivateUse3, Meta, FPGA, MAIA, Vulkan, Metal, QuantizedCPU, QuantizedCUDA, QuantizedHIP, QuantizedMPS, QuantizedIPU, QuantizedXPU, QuantizedHPU, QuantizedVE, QuantizedMTIA, QuantizedPrivateUse1, QuantizedPrivateUse2, QuantizedPrivateUse3, QuantizedMeta, CustomRNGKeyId, MkldnnCPU, SparseCsrCPU, SparseCsrCUDA, SparseCsrHIP, SparseCsrMPS, SparseCsrIPU, SparseCsrXPU, SparseCsrHPU, SparseCsrVE, SparseCsrMTIA, SparseCsrPrivateUse1, SparseCsrPrivateUse2, SparseCsrPrivateUse3, SparseCsrMeta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```
```console
$ python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-136-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
```
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
3,021,132,371
|
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: ModuleNotFoundError: No module named 'expecttest'
|
jjh42
|
closed
|
[
"module: regression",
"better-engineering",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Unfortunately I don't have a clean reproduction, but I'm hoping the underlying change might be obvious to someone.
I upgraded the PyTorch nightlies (from 2.8.0.dev20250422+cu128 to 20250428) for some other bugfix reasons.
In torch.compile (I haven't managed to make a nice isolated case) I get the following exception.
``` File "/root/ml-playground/elefant/lapo/stage3_labelled_bc.py", line 283, in training_step
loss = self._calculate_loss(batch)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 675, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1557, in _call_user_compiler
raise BackendCompilerFailed(
self.compiler_fn, e, inspect.currentframe()
).with_traceback(e.__traceback__) from None
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1532, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/__init__.py", line 2365, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2274, in compile_fx
return aot_autograd(
~~~~~~~~~~~~~
...<8 lines>...
ignore_shape_env=ignore_shape_env,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
)(model_, example_inputs_)
~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 106, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1171, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
dispatch_and_compile,
...<6 lines>...
remote,
)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 870, in load
compiled_fn = dispatch_and_compile()
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1156, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
functional_call,
^^^^^^^^^^^^^^^^
...<3 lines>...
shape_env,
^^^^^^^^^^
)
^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 576, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
flat_fn, fake_flat_args, aot_config, fake_mode, shape_env
)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 826, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
~~~~~~~~~~~^
flat_fn,
^^^^^^^^
...<2 lines>...
fw_metadata=fw_metadata,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 816, in aot_dispatch_autograd
fx_g, joint_inputs, maybe_subclass_meta = aot_dispatch_autograd_graph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~^
flat_fn, flat_args, aot_config, fw_metadata=fw_metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 318, in aot_dispatch_autograd_graph
fx_g = _create_graph(joint_fn_to_trace, updated_joint_inputs, aot_config=aot_config)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 55, in _create_graph
fx_g = make_fx(
...<3 lines>...
pre_dispatch=aot_config.pre_dispatch,
)(*args)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2288, in wrapped
return make_fx_tracer.trace(f, *args)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2226, in trace
return self._trace_inner(f, *args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 2197, in _trace_inner
t = dispatch_trace(
wrap_key(func, args, self.fx_tracer, self.pre_dispatch),
tracer=self.fx_tracer,
concrete_args=tuple(phs),
)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 856, in _fn
return fn(*args, **kwargs)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1221, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 856, in _fn
return fn(*args, **kwargs)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/fx/_symbolic_trace.py", line 837, in trace
(self.create_arg(fn(*args)),),
~~^^^^^^^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/fx/_symbolic_trace.py", line 691, in flatten_fn
tree_out = root_fn(*tree_args)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1276, in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 717, in inner_fn
outs = fn(*args)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 668, in joint_helper
return _functionalized_f_helper(primals, tangents)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 416, in _functionalized_f_helper
f_outs = fn(*f_args)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 283, in inner_fn_with_anomaly
return inner_fn(*args)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 268, in inner_fn
backward_out = torch.autograd.grad(
needed_outs,
...<2 lines>...
allow_unused=True,
)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/autograd/__init__.py", line 452, in grad
return handle_torch_function(
grad,
...<9 lines>...
materialize_grads=materialize_grads,
)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/overrides.py", line 1725, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/fx/experimental/proxy_tensor.py", line 1324, in __torch_function__
return func(*args, **kwargs)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
outputs,
...<5 lines>...
accumulate_grad=False,
)
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
t_outputs, *args, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^
) # Calls into the C++ engine to run the backward pass
^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/autograd/graph.py", line 802, in prehook
grad_outputs_str = f"[{','.join(fmt(t) for t in grad_outputs)}]"
~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/autograd/graph.py", line 802, in <genexpr>
grad_outputs_str = f"[{','.join(fmt(t) for t in grad_outputs)}]"
~~~^^^
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/autograd/graph.py", line 794, in fmt
from torch.testing._internal.common_utils import dtype_abbrs
File "/tmp/elefant-uv-env/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py", line 59, in <module>
import expecttest
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
ModuleNotFoundError: No module named 'expecttest'
```
As a workaround I've installed `expecttest` and it seems to resolve the issue, but I assume this should not ordinarily be required.
A quick look at `common_utils.py` and `graph.py` didn't show any recent changes that might cause this.
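One plausible shape of a fix (a sketch only, not the actual patch; the module and function names here are hypothetical) is to import the test-only dependency lazily and fall back when it is absent, so that merely reaching this code path from production code never hard-fails on a missing test package:

```python
def fmt_dtype(dtype_name: str) -> str:
    """Format a dtype name, importing a (hypothetical) abbreviation table lazily.

    Importing inside the function, with an ImportError fallback, means loading
    this module never pulls in a test-only dependency chain like expecttest.
    """
    try:
        # Hypothetical test-only module standing in for torch.testing._internal.
        from _hypothetical_test_only_module import dtype_abbrs
    except ImportError:
        dtype_abbrs = {}
    return dtype_abbrs.get(dtype_name, dtype_name)
```

When the test-only module is unavailable, the function simply returns the name unchanged instead of raising at import time.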
### Versions
Linux, python 3.13.2, pytorch nightly 20250428
cc @chauhang @penguinwu
| true
|
3,021,106,186
|
add xfail for distributed tests on Jetson
|
Fuzzkatt
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
We are hitting distributed import failures on Jetson in test/export/test_export.py tests in NVIDIA internal testing with the recent additions of https://github.com/pytorch/pytorch/pull/146050 and https://github.com/pytorch/pytorch/pull/147417. Instead of simply skipping these tests for Jetson, we are introducing an xfailIfDistributedNotSupported to get better signaling for this kind of failure in the long run.
cc @eqy, @nWEIdia
| true
|
3,021,092,983
|
Compute Capability Misrecognition on NVIDIA GeForce RTX 5070 Ti (Blackwell Architecture)
|
kaworukevin
|
closed
|
[
"module: cuda",
"module: third_party"
] | 2
|
NONE
|
### 🐛 Describe the bug
Dear PyTorch Team,
I am encountering an issue with PyTorch where my GPU, an NVIDIA GeForce RTX 5070 Ti (Blackwell architecture, expected Compute Capability sm_90), is being misidentified as sm_120. This is causing compatibility issues when running applications like FramePack, resulting in the error "CUDA error: no kernel image is available for execution on the device." I am reaching out to seek your assistance in resolving this issue.
System Details:
Operating System: Windows 11
GPU: NVIDIA GeForce RTX 5070 Ti (Blackwell architecture, should be sm_90)
NVIDIA Driver Version: 576.02 (confirmed to be the latest as of April 26, 2025)
PyTorch Version: 2.7.0+cu128
CUDA Toolkit Version: 12.8 (driver supports CUDA 12.9)
Python Version: 3.10
Issue Description: When I run the following command to check the Compute Capability:
python -c "import torch; print(torch.__version__); print(torch.cuda.get_device_capability())"
The output is:
2.7.0+cu128
(12, 0)
However, the expected Compute Capability for the Blackwell architecture (RTX 5070 Ti) should be sm_90 (Compute Capability 9.0). PyTorch warns that sm_120 is not compatible with the current installation, which supports sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90. This misrecognition leads to CUDA errors when running GPU-based applications.
Steps Taken:
Updated PyTorch to the latest version (2.7.0+cu128) using:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
Updated my NVIDIA driver to the latest version (576.02).
Attempted to recompile PyTorch from source with TORCH_CUDA_ARCH_LIST=9.0, but the issue persists.
Request for Assistance: Could you please help me understand why PyTorch is identifying my GPU as sm_120 instead of sm_90? Are there any known issues with Blackwell architecture support in PyTorch 2.7.0, or could this be related to driver compatibility? I would greatly appreciate any guidance on how to resolve this issue, whether through a configuration change, a patch, or further debugging steps.
You can reach me directly at kaworukevin@gmail.com for any follow-up questions or clarifications. Thank you for your time and support!
Best regards, kaworukevin
### Versions
Operating System: Windows 11
GPU: NVIDIA GeForce RTX 5070 Ti
NVIDIA Driver Version: 576.02
PyTorch Version: 2.7.0+cu128
CUDA Toolkit Version: 12.8
Python Version: 3.10
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,021,077,739
|
DISABLED test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41181906048).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 323, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,021,077,684
|
DISABLED test_pending_fusions_multiple (__main__.TestPrologueFusion)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pending_fusions_multiple&suite=TestPrologueFusion&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41181061777).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pending_fusions_multiple`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 1528, in test_pending_fusions_multiple
).run(code[0])
RuntimeError: Expected to not find ".run(" but found it
# Topologically Sorted Source Nodes: [relu], Original ATen: [aten.relu]
stream0 = get_raw_stream(0)
triton_poi_fused_relu_1.run(buf1, 16384, stream=stream0)
~~~~~ <--- HERE
return (buf1, )
From CHECK-NOT: .run(
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_max_autotune.py TestPrologueFusion.test_pending_fusions_multiple
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_max_autotune.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,021,075,754
|
[C10D] Allow NCCL single P2P ops to use parent/collective communicator
|
Edenzzzz
|
open
|
[
"oncall: distributed",
"triaged",
"module: nccl"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
As discussed in a previous PR/RFC (https://github.com/pytorch/pytorch/pull/129147, https://github.com/pytorch/pytorch/issues/129140), passing `device_id` to `init_process_group` will eagerly init the parent NCCL communicator, and subsequent P2P calls will use that instead of creating many rank-pairwise comms.
However, in the [latest code](https://github.com/pytorch/pytorch/blob/9e50c21e27268dcd4dbf82de26e7a2094b88d363/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L3823) it seems that non-batched P2P calls (`ncclSend`, `ncclRecv`, and perhaps even `all_to_all`) use rank-pairwise keys to create comms, which costs more warmup time and memory. I wonder if we should use the same `key = getKeyFromDevice(device)` for P2P and batched P2P for better efficiency.
Thanks!
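The tradeoff can be illustrated with a plain-Python sketch of the two keying schemes (illustrative only, not the actual `ProcessGroupNCCL` code): pairwise keys create one communicator cache entry per rank pair, while a device-wide key like the one `getKeyFromDevice(device)` produces reuses a single communicator.

```python
# Illustrative sketch only (not the actual ProcessGroupNCCL code): compare the
# number of communicator cache entries under rank pair-wise keying vs a single
# device-wide key.
def pairwise_key(rank_a: int, rank_b: int) -> str:
    lo, hi = sorted((rank_a, rank_b))
    return f"{lo}:{hi}"

def device_key(device_index: int) -> str:
    return str(device_index)

world_size = 4
pairwise_keys = {
    pairwise_key(a, b) for a in range(world_size) for b in range(world_size) if a < b
}
shared_keys = {device_key(0) for _ in range(world_size)}
print(len(pairwise_keys), len(shared_keys))
```

With 4 ranks all doing P2P with each other, the pairwise scheme creates 6 communicators versus 1 for the shared scheme, which is where the extra warmup time and memory come from.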
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @kwen2501
### Alternatives
_No response_
### Additional context
_No response_
| true
|
3,021,066,167
|
Have compiled autograd config API support nested compilation
|
xmfan
|
open
|
[
"triaged",
"oncall: pt2",
"module: compiled autograd"
] | 0
|
MEMBER
|
### 🐛 Describe the bug
e.g. in the modded-nanogpt speedrun, we have some custom op that has another torch.compile inside of it. This will raise `RuntimeError: compiled_autograd._enable() requires no threads in backwards()` if we use the config API. Using the context manager is fine in this case, because the nested compile runs as an inference graph and never ends up calling autograd
```python
@torch.library.custom_op("nanogpt::mm_backward", mutates_args=())
def mm_backward_op(g: Tensor, x_f8: Tensor, w_f8: Tensor, x_s: float, w_s: float, grad_s: float) -> tuple[Tensor, Tensor]:
@torch.compile
def impl(grad: Tensor, x_f8: Tensor, w_f8: Tensor):
assert grad.is_contiguous()
x_inv_s = grad.new_tensor(x_s, dtype=torch.float32)
w_inv_s = grad.new_tensor(w_s, dtype=torch.float32)
grad_inv_s = grad.new_tensor(grad_s, dtype=torch.float32)
grad_f8 = grad.div(grad_s).to(torch.float8_e5m2)
grad_x = torch._scaled_mm(
grad_f8,
w_f8.T.contiguous().T,
out_dtype=torch.bfloat16,
scale_a=grad_inv_s,
scale_b=w_inv_s,
use_fast_accum=False,
)
# faster than grad_f8_t @ x_f8, for (d_out, d_in) == (50304, 768)
grad_w = torch._scaled_mm(
x_f8.T.contiguous(),
grad_f8.T.contiguous().T,
out_dtype=torch.float32,
scale_a=x_inv_s,
scale_b=grad_inv_s,
use_fast_accum=False,
).T
return grad_x, grad_w
return impl(g, x_f8, w_f8)
```
### Versions
main
cc @chauhang @penguinwu
| true
|
3,021,060,951
|
Require EasyCLA check even when force merging
|
ZainRizvi
|
closed
|
[
"topic: not user facing",
"test-config/xla"
] | 3
|
CONTRIBUTOR
|
Always require EasyCLA to pass before merging
| true
|
3,021,053,662
|
[not for land] functionalization hack to try making mutations on graph input slices more efficient
|
bdhirsh
|
open
|
[
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
not for land since this still has silent correctness problems
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152217
| true
|
3,021,050,230
|
[TF32][CUDA] account for TF32 in `test_linear_autograd`
|
eqy
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"module: tf32",
"ciflow/trunk",
"topic: not user facing",
"matrix multiplication",
"Blackwell"
] | 3
|
COLLABORATOR
|
Abate some more noise seen on blackwell
cc @ptrblck @msaroufim @jerryzh168 @zasdfgbnm
| true
|
3,021,050,001
|
Improve error handling in CachingAutotuner for argument mismatches
|
ShreyRoy
|
open
|
[
"triaged",
"open source",
"module: inductor",
"release notes: inductor"
] | 3
|
NONE
|
Fixes #147690
Adds a check in CachingAutotuner.run() to validate that the number of provided arguments matches the expected number of launcher arguments.
If there is a mismatch, a clear TypeError is raised, specifying the expected and actual argument counts.
This improves the debuggability of kernel launch failures, providing a more informative error message instead of a low-level runtime exception.
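A minimal sketch of that check (hypothetical names, not Inductor's actual internals): compare the caller's argument count against the launcher's signature and raise a descriptive `TypeError` on mismatch.

```python
import inspect

# Sketch only: validate the caller's argument count against the launcher's
# expected signature before invoking it, so a mismatch fails with a clear
# TypeError instead of a low-level runtime exception.
class CachingAutotunerSketch:
    def __init__(self, launcher):
        self.launcher = launcher
        self.expected = len(inspect.signature(launcher).parameters)

    def run(self, *args):
        if len(args) != self.expected:
            raise TypeError(
                f"launcher expects {self.expected} argument(s), got {len(args)}"
            )
        return self.launcher(*args)

tuner = CachingAutotunerSketch(lambda buf, numel: (buf, numel))
try:
    tuner.run(1)  # one argument too few
except TypeError as e:
    print(e)
```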
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,021,036,298
|
[MPS/inductor] Fix the approximation of polygamma for n == 0.
|
dcci
|
closed
|
[
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
Fixes #152205
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,021,032,974
|
Outdated install commands
|
mcandre
|
open
|
[
"module: docs",
"triaged",
"actionable"
] | 2
|
NONE
|
Python now recommends invoking pip as a module (`python3 -m pip`) rather than through the `pip` executable, since the module form integrates better with isolated Python environments such as virtualenvs.
The install commands generated by this documentation page:
https://pytorch.org/get-started/locally/
should replace `pip3 install`... with `python3 -m pip install`...
As an aside, note that both `pip3` and `python3` are broken in various RHEL environments, where the commands are forcibly suffixed with the minor version e.g. `python3.12`.
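For illustration, the module-form invocation can be built around a concrete interpreter path (a sketch, not part of the docs generator):

```python
import sys

# Build the recommended module-form pip invocation around a concrete
# interpreter path, so packages install into that interpreter's environment
# (including virtualenvs) regardless of how the pip binary itself is named.
cmd = [sys.executable, "-m", "pip", "install", "torch"]
print(" ".join(cmd))
```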
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
3,021,025,121
|
Have cherry-pick bot always add the current release to the PR
|
ZainRizvi
|
open
|
[
"oncall: releng",
"module: ci",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Goal is to make sure that any PR someone attempts to cherry pick gets the release milestone added to it for tracking.
Whenever a cherry-pick is requested on a PR, we should first try to add the current release milestone before attempting the actual cherry-pick (do it in that order in case the cherry-pick fails).
One tricky part might be automatically determining the current milestone, since we create those in advance. Checking the version of the latest release branch could do the trick
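The ordering can be sketched as follows (all names hypothetical, for illustration only):

```python
# Hypothetical sketch of the ordering described above: attach the milestone
# before attempting the cherry-pick, so tracking survives a failed pick.
def cherry_pick_with_milestone(pr, milestone, add_milestone, do_cherry_pick):
    add_milestone(pr, milestone)  # do this first, in case the pick fails
    try:
        do_cherry_pick(pr)
        return True
    except RuntimeError:
        return False

log = []

def fake_add_milestone(pr, milestone):
    log.append((pr, milestone))

def failing_cherry_pick(pr):
    raise RuntimeError("cherry-pick conflict")

ok = cherry_pick_with_milestone(
    "pr-123", "2.8.0", fake_add_milestone, failing_cherry_pick
)
print(ok, log)
```

Even though the pick fails here, the milestone is already attached for tracking.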
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,020,997,121
|
Mini tutorial for provenance tracking
|
yushangdi
|
open
|
[
"release notes: export"
] | 6
|
CONTRIBUTOR
|
as title
| true
|
3,020,993,360
|
Move mps_linear forward to use MPS kernels directly instead of MPSGraph
|
jhavukainen
|
open
|
[
"triaged",
"open source",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
This PR moves `mps_linear` to use MPSNDArrays and call into the MPS kernel directly instead of going through MPSGraph. It also adds a caching mechanism for reusing MPS kernels as there is also a small overhead attached to creating the kernel object.
The improvement is more significant for small inputs, where the MPSGraph overhead represents a larger portion of the overall execution time of the operation, but the speedup shows for both small and large input sizes as expected.
`mps_linear` before the changes:
```
input shapes: f32:[1,1,20], f32:[1,20]
torch.linear time: <torch.utils.benchmark.utils.common.Measurement object at 0x109d67110>
func(*args, **kwargs)
Median: 199.29 us
IQR: 9.56 us (196.71 to 206.27)
979 measurements, 1 runs per measurement, 1 thread
input shapes: f32:[1,1,5120], f32:[13284,5120]
torch.linear time: <torch.utils.benchmark.utils.common.Measurement object at 0x1063b4510>
func(*args, **kwargs)
Median: 979.29 us
IQR: 25.29 us (964.83 to 990.13)
205 measurements, 1 runs per measurement, 1 thread
```
`mps_linear` after the changes:
```
input shapes: f32:[1,1,20], f32:[1,20]
torch.linear time: <torch.utils.benchmark.utils.common.Measurement object at 0x10693a190>
func(*args, **kwargs)
Median: 176.08 us
IQR: 15.02 us (172.42 to 187.44)
1103 measurements, 1 runs per measurement, 1 thread
input shapes: f32:[1,1,5120], f32:[13284,5120]
torch.linear time: <torch.utils.benchmark.utils.common.Measurement object at 0x10d524dd0>
func(*args, **kwargs)
Median: 952.56 us
IQR: 15.63 us (945.47 to 961.10)
210 measurements, 1 runs per measurement, 1 thread
```
cc @kulinseth @albanD @malfet @DenisVieriu97
| true
|
3,020,982,686
|
[CI] docker images use tags instead of image name
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Change CI docker images to be `ci-image:<image name>-<folder sha>` instead of `<image name>:<folder sha>` so we never have to make a new ecr repo ever again
Pros:
never have to make a new ecr repo ever again
Cons:
if it aint broken, dont fix it?
Don't need to change linux-test images since they use the "full name" of the image with the docker registry and the tag
In order to prevent others needing to rebase past this PR, also push the image to the "old name". This can be removed after this PR has been in main for a while
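The renaming can be illustrated as (image name and sha made up):

```python
# Old scheme: one ECR repo per image name. New scheme: a single "ci-image"
# repo, with the image name folded into the tag.
def old_ref(image: str, sha: str) -> str:
    return f"{image}:{sha}"

def new_ref(image: str, sha: str) -> str:
    return f"ci-image:{image}-{sha}"

print(old_ref("pytorch-linux-jammy", "abc123"))
print(new_ref("pytorch-linux-jammy", "abc123"))
```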
| true
|
3,020,930,299
|
Add support for torch.cuda.FloatTensor()
|
jijiew
|
open
|
[
"module: inductor",
"module: dynamo",
"release notes: dynamo"
] | 4
|
CONTRIBUTOR
|
Fixes #130722
Add support for torch.cuda.FloatTensor()
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,020,906,468
|
[invoke_subgraph] Use backward identifier for min-cut parititioning
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152494
* #152490
* #152384
* #152383
* #152357
* __->__ #152207
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,020,847,690
|
[dynamo] remove dead code for DATA_PTR_MATCH
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: Seems this guard is not created anywhere
Test Plan: CI
Differential Revision: D73682084
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,020,786,322
|
[MPS/Inductor] polygamma is miscompiled for some inputs
|
dcci
|
closed
|
[
"triaged",
"module: mps",
"oncall: pt2",
"module: inductor"
] | 0
|
MEMBER
|
### 🐛 Describe the bug
Repro:
```
>>> import torch
>>> torch.special.polygamma(0, torch.tensor([2]))
tensor([0.4228])
>>> torch.special.polygamma(0, torch.tensor([2]).to('mps'))
tensor([0.4228], device='mps:0')
>>> torch.compile(lambda x: torch.special.polygamma(0, x))(torch.tensor([2], device='mps'))
tensor([-inf], device='mps:0')
```
### Versions
Apple M1
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,020,730,613
|
[MPS] Fix ICE for entr bool instantiation on M1/M2
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147893
* __->__ #152204
By instantiating it implicitly, otherwise attempts to run something like
```
% python3 -c "import torch; print(torch.special.entr(torch.testing.make_tensor(10, dtype=torch.bool, device='mps')))"
```
will fail with
```
Failed to created pipeline state object, error: Error Domain=AGXMetalG14X Code=3 "Compiler encountered an internal error"
```
Similar in spirit to https://github.com/pytorch/pytorch/pull/149123
| true
|
3,020,703,805
|
[CUDA][conv3d] bump tolerances for `test_variant_consistency_eager` `conv3d` `complex64`
|
eqy
|
closed
|
[
"module: cuda",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
~1/1000 1.5e-5 mismatch on A100
cc @ptrblck @msaroufim @jerryzh168
| true
|
3,020,698,823
|
Speed-up time spent in generating shaped str keys
|
jhavukainen
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 3
|
COLLABORATOR
|
Replaces the janky approach of converting the IntArrayRef into an NSArray just to ask it for a string representation of its contents with a plain stringstream.
This speeds up building the key string for caching (or reading from the cache) for shaped inputs by ~5x. While the actual wall time, depending on the number of input tensors, is only a few microseconds, it represents a non-negligible chunk of the overall time spent preparing to dispatch work to the GPU. And since this function gets called every time a (cacheable) MPS operation is used, it should be a small but broadly impacting time saver.
Using mps_linear as an example. Note this is before PR https://github.com/pytorch/pytorch/pull/152199 so it only captures the CPU time spent in the op call:
Before the change:
```
torch.linear time: <torch.utils.benchmark.utils.common.Measurement object at 0x1108f07d0>
func(*args, **kwargs)
Median: 22.75 us
IQR: 0.87 us (22.50 to 23.38)
8361 measurements, 1 runs per measurement, 1 thread
```
After the change:
```
torch.linear time: <torch.utils.benchmark.utils.common.Measurement object at 0x108875350>
func(*args, **kwargs)
Median: 18.67 us
IQR: 0.46 us (18.50 to 18.96)
10342 measurements, 1 runs per measurement, 1 thread
```
Which aligns with the observed change for getTensorStringKeys() taking ~1us instead of ~5us in mps_linear op I got from a point measurement sandwiching the function call with `std::chrono::high_resolution_clock`.
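A plain-Python analogue of the change (hypothetical helper name; the real code is Objective-C++ using a stringstream): format the shape key directly instead of round-tripping through a container's description.

```python
# Hypothetical analogue of the string-key generation: build the cache key for a
# set of input shapes with direct string formatting, no intermediate container.
def tensor_string_key(shapes: list[list[int]]) -> str:
    return ";".join(
        "[" + ",".join(str(dim) for dim in shape) + "]" for shape in shapes
    )

print(tensor_string_key([[1, 1, 20], [1, 20]]))  # [1,1,20];[1,20]
```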
cc @kulinseth @albanD @malfet @DenisVieriu97
| true
|
3,020,693,469
|
DISABLED test_reduce_stress_cuda (__main__.ProcessGroupGlooLazyInitTest)
|
jithunnair-amd
|
open
|
[
"module: rocm",
"triaged",
"skipped"
] | 1
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it failed on the MI300 runners in [151368](https://github.com/pytorch/pytorch/pull/151368): https://github.com/pytorch/pytorch/actions/runs/14502441175/job/40686794743
The `stress_cuda` tests seem to be flaky.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,020,653,804
|
[submodule] Update ONNX to 1.18
|
cyyever
|
open
|
[
"oncall: jit",
"triaged",
"open source",
"NNC",
"ciflow/binaries",
"ciflow/trunk",
"release notes: onnx",
"ciflow/periodic"
] | 8
|
COLLABORATOR
|
ONNX 1.18 is about to be released. Its third-party module is now updated to RC2 to verify whether PyTorch can use it with the necessary changes. Once 1.18 has been released, it will be updated to that.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,020,642,426
|
Synchronize mps backend in the timer
|
jhavukainen
|
open
|
[
"triaged",
"open source",
"release notes: benchmark"
] | 3
|
COLLABORATOR
|
Add synchronization for the MPS op measurements with the timer class in benchmark utils. This enables measuring the true execution time when we wait for the GPU results.
Test output from calling linear op before the change (which ignores waiting for the GPU result):
```
torch.linear time: <torch.utils.benchmark.utils.common.Measurement object at 0x1108f07d0>
func(*args, **kwargs)
Median: 22.75 us
IQR: 0.87 us (22.50 to 23.38)
8361 measurements, 1 runs per measurement, 1 thread
```
and after the change, the measurement accounts for the GPU result turnaround time:
```
torch.linear time: <torch.utils.benchmark.utils.common.Measurement object at 0x10a8cd110>
func(*args, **kwargs)
Median: 245.08 us
IQR: 22.40 us (235.73 to 258.13)
815 measurements, 1 runs per measurement, 1 thread
```
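The idea can be sketched in plain Python (illustrative names, not the benchmark-utils API): give the timer an optional `synchronize` hook that runs before the clock stops, so asynchronous work is included in the measurement.

```python
import time

# Sketch of synchronized timing: stop the clock only after an optional
# synchronize() hook has drained outstanding asynchronous (GPU) work.
def timed(fn, synchronize=None):
    start = time.perf_counter()
    fn()
    if synchronize is not None:
        synchronize()  # e.g. the MPS backend sync on real hardware
    return time.perf_counter() - start

# Fake "async backend": pending work only completes during synchronize().
pending = []
elapsed_no_sync = timed(lambda: pending.append("kernel"))
elapsed_sync = timed(
    lambda: pending.append("kernel"),
    synchronize=lambda: time.sleep(0.01),  # stand-in for waiting on the GPU
)
print(elapsed_sync > elapsed_no_sync)
```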
| true
|
3,020,603,537
|
[inductor] propagate shapes in CSEVariable
|
isuruf
|
open
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152198
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,020,603,246
|
Add detailed triton kernel logging to tlparse
|
jamesjwu
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: AO frontend",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152197
This PR adds detailed logging, to tlparse, of each triton kernel we compile and its autotune result. We add these results to a global variable that we then clear after each triton kernel compile.
We can't keep these objects around after compile time, so we can't record the autotune cache save or coordinate descent tuning, unfortunately, but we can log at least:
- The duration of compilation
- Whether or not autotune cache hit
- The best autotuning config, if there's only one.
Example triton kernel info: https://gist.github.com/jamesjwu/493bdd0f36b0b7e3ca327f87bd6c2c75
See internal diff for an example log for internal model.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D73674443](https://our.internmc.facebook.com/intern/diff/D73674443)
| true
|
3,020,548,187
|
[ONNX] add converters for sym_min, sym_max
|
xadupre
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 3
|
COLLABORATOR
|
Conversion of Phi4-multimodel-instruct fails because of missing converters for torch.sym_max and torch.sym_min.
| true
|
3,020,467,918
|
SAC: fix recompute tag propagation for ops with list[tensor] inputs
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: autograd",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
There's an "are we compiling" check in SAC, which we rely on to know when to propagate recompute tags during tracing.
This check was a bit brittle, and missed cases where input ops accept lists of tensors - I updated it to check whether a `FunctionalTensorMode` is active, which should be a 100% reliable way to know that AOTDispatcher is in the middle of running.
There is a long-standing followup here around unifying `torch.compiler.is_compiling()` to work in all cases. We should probably just update it to always check if FakeMode/FunctionalMode are active and use it there. This has a bit of BC risk though so I opted for the more local fix to SAC.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151719
* #152688
* __->__ #152195
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,020,462,474
|
SAC: fix recompute tag propagation for ops with list[tensor] inputs
|
bdhirsh
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,020,458,351
|
SAC: fix recompute tag propagation for ops with list[tensor] inputs
|
bdhirsh
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,020,413,080
|
xpu: get xpu arch flags at runtime in cpp_extensions
|
dvrogozh
|
open
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu",
"release notes: xpu",
"module: xpu"
] | 9
|
CONTRIBUTOR
|
This commit moves the query for XPU arch flags to runtime when building SYCL extensions, which allows adjusting `TORCH_XPU_ARCH_LIST` at the Python script level. That's handy, for example, in a CI test that tries a few variants of the list.
CC: @malfet, @jingxu10, @EikanWang, @guangyey
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,020,288,662
|
[`Torch 2.7.0 x Py 3.9`] Incompatible dep versions with networkx
|
vasqu
|
closed
|
[
"triage review",
"oncall: releng",
"module: regression",
"module: third_party"
] | 16
|
NONE
|
### 🐛 Describe the bug
There seems to be a bug in the new release of torch (2.7.0) when using Python 3.9. It is caused by `networkx`, which is not pinned in the dependencies but has dropped support for Python 3.9. Example error log from https://github.com/huggingface/transformers/pull/37695
```
File "/usr/local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 62, in <module>
from .integrations.flex_attention import flex_attention_forward
File "/usr/local/lib/python3.9/site-packages/transformers/integrations/flex_attention.py", line 39, in <module>
from torch.nn.attention.flex_attention import BlockMask, flex_attention
File "/usr/local/lib/python3.9/site-packages/torch/nn/attention/flex_attention.py", line 15, in <module>
from torch._dynamo._trace_wrapped_higher_order_op import TransformGetItemToIndex
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/__init__.py", line 53, in <module>
from .polyfills import loader as _ # usort: skip # noqa: F401
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/polyfills/loader.py", line 25, in <module>
POLYFILLED_MODULES: tuple["ModuleType", ...] = tuple(
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/polyfills/loader.py", line 26, in <genexpr>
importlib.import_module(f".{submodule}", package=polyfills.__name__)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/polyfills/builtins.py", line 31, in <module>
def all(iterable: Iterable[object], /) -> bool:
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/decorators.py", line 427, in wrapper
rule_map: dict[Any, type[VariableTracker]] = get_torch_obj_rule_map()
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 2870, in get_torch_obj_rule_map
obj = load_object(k)
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 2901, in load_object
val = _load_obj_from_str(x[0])
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 2885, in _load_obj_from_str
return getattr(importlib.import_module(module), obj_name)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.9/site-packages/torch/_higher_order_ops/map.py", line 6, in <module>
from torch._functorch.aot_autograd import AOTConfig, create_joint
File "/usr/local/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 135, in <module>
from .partitioners import default_partition
File "/usr/local/lib/python3.9/site-packages/torch/_functorch/partitioners.py", line 37, in <module>
from ._activation_checkpointing.graph_info_provider import GraphInfoProvider
File "/usr/local/lib/python3.9/site-packages/torch/_functorch/_activation_checkpointing/graph_info_provider.py", line 3, in <module>
import networkx as nx
File "/usr/local/lib/python3.9/site-packages/networkx/__init__.py", line 19, in <module>
from networkx import utils
File "/usr/local/lib/python3.9/site-packages/networkx/utils/__init__.py", line 7, in <module>
from networkx.utils.backends import *
File "/usr/local/lib/python3.9/site-packages/networkx/utils/backends.py", line 258, in <module>
backends = _get_backends("networkx.backends")
File "/usr/local/lib/python3.9/site-packages/networkx/utils/backends.py", line 234, in _get_backends
items = entry_points(group=group)
TypeError: entry_points() got an unexpected keyword argument 'group'
```
For a deeper dive, see https://github.com/huggingface/transformers/pull/37695#issuecomment-2830506423 - I'd suggest pinning the version or something of the sort.
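For reference, the incompatibility can be worked around with a small shim (an illustrative workaround, not a proposed PyTorch fix): Python 3.10+ accepts `entry_points(group=...)`, while the 3.9 stdlib version only supports the mapping interface.

```python
from importlib.metadata import entry_points

# Compatibility shim for the entry_points() API change: prefer the keyword
# form (Python 3.10+), fall back to the mapping interface (Python 3.9 stdlib).
def get_entry_points(group: str):
    try:
        return list(entry_points(group=group))
    except TypeError:  # Python 3.9 stdlib importlib.metadata
        return list(entry_points().get(group, []))

eps = get_entry_points("console_scripts")
print(type(eps) is list)
```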
### Versions
`torch==2.7.0`
`python==3.9`
| true
|
3,020,240,259
|
HUD Dashboard sort by perf speedup doesn't do anything
|
zou3519
|
open
|
[
"triaged",
"bug",
"module: devx"
] | 3
|
CONTRIBUTOR
|

the up arrow next to "perf. speedup" lets me sort ascending or descending, but it doesn't actually change the chart
cc @ZainRizvi @huydhn @clee2000 @pytorch/pytorch-dev-infra
| true
|
3,020,240,127
|
[BE]: Use typing.get_args in torch/types
|
Skylion007
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
3,020,239,775
|
The input for layers other than the first layer should be the hidden state from the previous layer.
|
JinQi-Tang
|
open
|
[
"module: nn",
"module: rnn",
"triaged"
] | 0
|
NONE
|
https://github.com/pytorch/pytorch/blame/134179474539648ba7dee1317959529fbd0e7f89/torch/nn/modules/rnn.py#L499
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
3,020,212,746
|
test(Conv3d): use correct class for `test_Conv3d_module_same_padding`
|
Alvaro-Kothe
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
The test for the class `Conv3d` is calling `Conv2d`. This PR just ensures that we are testing the correct module.
| true
|
3,020,205,467
|
Unskip index_put in cudagraphs
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152186
The repro from the original skip in https://github.com/pytorch/pytorch/pull/105439 no longer fails. Unskip.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,020,202,368
|
GroupNorm compilation errors on UNet-based architecture on torch >= 2.6.0
|
GLivshits
|
open
|
[
"module: nn",
"triaged",
"module: norms and normalization",
"oncall: pt2"
] | 0
|
NONE
|
### 🐛 Describe the bug
New torch versions - new bugs! I've had to cope with compilation issues on diffusion model architectures from 2.4.0 to 2.5.1 (where it finally works with TORCHINDUCTOR_LAYOUT_OPTIMIZATION=0), for example https://github.com/pytorch/pytorch/issues/133571.
On 2.7.0 and 2.6.0 an error occurs when I try to use GroupNorm with my architecture + FSDP + compile (note issue https://github.com/pytorch/pytorch/issues/97623 mentioning the same problem).
Repro code:
```python
import argparse
import os
import random
from contextlib import nullcontext
from typing import List, Optional
import torch
import torch.distributed as dist
import torch.nn.functional as F
from einops import rearrange
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy
from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler
from torch.distributed.fsdp.wrap import ModuleWrapPolicy
from torch.nn import RMSNorm
from torch.nn.parallel import DistributedDataParallel
from tqdm.auto import tqdm
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.allow_tf32 = True
torch.backends.cuda.matmul.allow_tf32 = True
torch._dynamo.config.cache_size_limit = 128
torch._dynamo.config.optimize_ddp = False
torch.profiler._utils._init_for_cuda_graphs()
def setup(rank, world_size):
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12355"
torch.cuda.set_device(rank)
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
def zero_module(module):
"""
Zero out the parameters of a module and return it.
"""
for p in module.parameters():
p.detach().zero_()
return module
class SelfAttention(nn.Module):
def __init__(self, input_dim: int, out_dim: int, d_head: int):
super().__init__()
self.input_dim = input_dim
self.out_dim = out_dim
self.d_head = d_head
self.n_heads = self.out_dim // self.d_head
self.d_attn = self.out_dim
self.pre_norm = nn.LayerNorm(input_dim)
self.qkv_proj = nn.Linear(input_dim, 3 * self.d_attn, bias=False)
self.q_norm = RMSNorm(self.d_attn, eps=1e-6)
self.k_norm = RMSNorm(self.d_attn, eps=1e-6)
self.to_out = nn.Linear(self.d_attn, self.out_dim)
def forward(
self,
x: torch.Tensor,
cond: Optional[torch.Tensor] = None,
cond_mask: Optional[torch.Tensor] = None,
):
b, c, h, w = x.shape
x = x.permute(0, 2, 3, 1).view(b, h * w, c)
x = self.pre_norm(x)
q, k, v = self.qkv_proj(x).chunk(dim=-1, chunks=3)
q = self.q_norm(q)
k = self.k_norm(k)
q = rearrange(q, "b n (h d) -> b h n d", h=self.n_heads)
k = rearrange(k, "b n (h d) -> b h n d", h=self.n_heads)
v = rearrange(v, "b n (h d) -> b h n d", h=self.n_heads)
out = F.scaled_dot_product_attention(q, k, v)
out = rearrange(out, "b h n d -> b n (h d)", h=self.n_heads)
out = self.to_out(out)
return out.permute(0, 2, 1).view(b, out.shape[-1], h, w)
class CrossAttention(nn.Module):
def __init__(self, input_dim: int, cond_dim: int, out_dim: int, d_head: int):
super().__init__()
self.input_dim = input_dim
self.cond_dim = cond_dim
self.out_dim = out_dim
self.d_head = d_head
self.n_heads = self.out_dim // self.d_head
self.d_attn = self.out_dim
self.pre_norm = nn.LayerNorm(input_dim)
self.cond_pre_norm = nn.LayerNorm(cond_dim)
self.q_proj = nn.Linear(input_dim, self.d_attn, bias=False)
self.kv_proj = nn.Linear(cond_dim, 2 * self.d_attn, bias=False)
self.q_norm = RMSNorm(self.d_attn, eps=1e-6)
self.k_norm = RMSNorm(self.d_attn, eps=1e-6)
self.to_out = nn.Linear(self.d_attn, self.out_dim)
def forward(
self,
x: torch.Tensor,
cond: torch.Tensor,
cond_mask: Optional[torch.Tensor] = None,
):
b, c, h, w = x.shape
x = x.permute(0, 2, 3, 1).view(b, h * w, c)
x = self.pre_norm(x)
cond = self.cond_pre_norm(cond)
q = self.q_proj(x)
k, v = self.kv_proj(cond).chunk(dim=-1, chunks=2)
q = self.q_norm(q)
k = self.k_norm(k)
q = rearrange(q, "b n (h d) -> b h n d", h=self.n_heads)
k = rearrange(k, "b n (h d) -> b h n d", h=self.n_heads)
v = rearrange(v, "b n (h d) -> b h n d", h=self.n_heads)
if cond_mask is not None:
cond_mask = cond_mask.unsqueeze(1).unsqueeze(1)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=cond_mask)
out = rearrange(out, "b h n d -> b n (h d)", h=self.n_heads)
out = self.to_out(out)
return out.permute(0, 2, 1).view(b, out.shape[-1], h, w)
class Upsample(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x: torch.Tensor, *args, **kwargs):
x = F.interpolate(x, scale_factor=2, mode="nearest")
return x
class Downsample(nn.Module):
def __init__(self):
super().__init__()
self.op = nn.AvgPool2d(kernel_size=2, stride=2)
def forward(self, x: torch.Tensor, *args, **kwargs):
return self.op(x)
class Sequential(nn.Sequential):
def forward(self, x, *args, **kwargs):
for layer in self:
x = layer(x, *args, **kwargs)
return x
class SpatialLayerNorm(nn.LayerNorm):
def forward(self, x: torch.Tensor):
return super().forward(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
class ResBlock(nn.Module):
def __init__(
self,
channels: int,
dropout: float,
out_channels: Optional[int] = None,
mid_channels: Optional[int] = None,
use_conv: bool = False,
up: bool = False,
down: bool = False,
norm_groups: int = 32,
):
super().__init__()
self.channels = channels
self.dropout = dropout
self.out_channels = out_channels or channels
self.mid_channels = mid_channels or self.out_channels
self.use_conv = use_conv
self.in_layers = nn.ModuleList(
[
nn.GroupNorm(
num_channels=channels, num_groups=norm_groups, eps=1e-6, affine=True
),
nn.SiLU(),
nn.Conv2d(channels, self.mid_channels, 3, padding=1),
]
)
self.in_layers_len = len(self.in_layers)
self.updown = up or down
if up:
self.h_upd = Upsample()
self.x_upd = Upsample()
elif down:
self.h_upd = Downsample()
self.x_upd = Downsample()
else:
self.h_upd = self.x_upd = nn.Identity()
self.out_layers = nn.ModuleList(
[
nn.GroupNorm(
num_channels=self.mid_channels,
num_groups=norm_groups,
eps=1e-6,
affine=True,
),
nn.SiLU(),
nn.Dropout(p=dropout),
zero_module(
nn.Conv2d(self.mid_channels, self.out_channels, 3, padding=1)
),
]
)
self.out_layers_len = len(self.out_layers)
if use_conv:
self.skip_connection = nn.Conv2d(channels, self.out_channels, 1)
else:
if self.out_channels == channels:
self.skip_connection = nn.Identity()
else:
self.skip_connection = nn.Conv2d(channels, self.out_channels, 1)
def forward(self, x: torch.Tensor, *args, **kwargs):
h = x
for i in range(self.in_layers_len - 1):
h = self.in_layers[i](h)
if self.updown:
h = self.h_upd(h)
x = self.x_upd(x)
h = self.in_layers[self.in_layers_len - 1](h)
for i in range(self.out_layers_len):
h = self.out_layers[i](h)
out = self.skip_connection(x) + h
return out
class UNet(nn.Module):
def __init__(
self,
in_dim: int,
cond_dim: int,
channels: List[int],
attns: List[int],
middle_attns: int = 0,
):
super().__init__()
assert len(attns) == len(channels) - 1
self.in_dim = in_dim
self.down_blocks = nn.ModuleList([])
self.up_blocks = nn.ModuleList([])
ch = channels[0]
in_chs = [ch]
self.in_block = nn.Conv2d(in_dim, channels[0], kernel_size=3, padding=1)
for i, (ch, out_ch) in enumerate(zip(channels[:-1], channels[1:])):
layer = [ResBlock(ch, 0.0, out_ch, out_ch)]
if attns[i] > 0:
for _ in range(attns[i]):
layer.append(SelfAttention(out_ch, out_ch, 64))
layer.append(CrossAttention(out_ch, cond_dim, out_ch, 64))
layer.append(ResBlock(out_ch, 0.0, out_ch, out_ch, down=True))
self.down_blocks.append(Sequential(*layer))
in_chs.append(out_ch)
layer = [ResBlock(out_ch, 0.0, out_ch, out_ch)]
if middle_attns > 0:
for _ in range(middle_attns):
layer.append(SelfAttention(out_ch, out_ch, 64))
layer.append(CrossAttention(out_ch, cond_dim, out_ch, 64))
layer.append(ResBlock(out_ch, 0.0, out_ch, out_ch))
self.middle_block = Sequential(*layer)
for i, (ch1, ch2) in enumerate(zip(channels[::-1][:-1], channels[::-1][1:])):
i = len(attns) - 1 - i
ch = ch1 + in_chs.pop()
out_ch = ch2
layer = [ResBlock(ch, 0.0, out_ch, out_ch)]
if attns[i] > 0:
for _ in range(attns[i]):
layer.append(SelfAttention(out_ch, out_ch, 64))
layer.append(CrossAttention(out_ch, cond_dim, out_ch, 64))
layer.append(ResBlock(out_ch, 0.0, out_ch, out_ch, up=True))
self.up_blocks.append(Sequential(*layer))
self.out_block = zero_module(
nn.Conv2d(out_ch, in_dim, kernel_size=3, padding=1)
)
# Register dummy buffer
self.register_buffer("dummy_buffer", torch.tensor([1.0, 1.1, 1.2, 1.3]), persistent=False)
def forward(
self,
x: torch.Tensor,
cond: torch.Tensor,
cond_mask: Optional[torch.Tensor] = None,
):
res = []
x = x * self.dummy_buffer.view(1, -1, 1, 1)
x = self.in_block(x)
for layer in self.down_blocks:
x = layer(x, cond, cond_mask)
res.append(x)
x = self.middle_block(x, cond, cond_mask)
for layer in self.up_blocks:
x = torch.cat([x, res.pop()], dim=1)
x = layer(x, cond, cond_mask)
x = self.out_block(x)
return x
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--num_iterations", type=int, default=200)
parser.add_argument("--use_ddp", action="store_true")
parser.add_argument("--use_fsdp", action="store_true")
parser.add_argument("--use_compile", action="store_true")
args = parser.parse_args()
return args
def main(rank, world_size, args):
setup(rank, world_size)
assert not (args.use_ddp and args.use_fsdp)
device = torch.device(f"cuda:{rank}")
dtype = torch.bfloat16
cond_dim = 1024
cond_len = 128
model = UNet(4, cond_dim, [128, 256, 512, 512], [2, 2, 2], 2).to(device)
if args.use_fsdp:
model = FSDP(
module=model,
device_id=rank,
use_orig_params=args.use_compile,
sharding_strategy=ShardingStrategy.HYBRID_SHARD,
auto_wrap_policy=ModuleWrapPolicy({nn.Sequential}),
mixed_precision=MixedPrecision(
param_dtype=dtype,
buffer_dtype=dtype,
reduce_dtype=dtype,
),
)
loss_amp_context = torch.amp.autocast("cuda", dtype=dtype, enabled=True)
model_amp_context = nullcontext()
scaler = ShardedGradScaler(enabled=dtype == torch.float16)
else:
if args.use_ddp:
model = DistributedDataParallel(
model,
broadcast_buffers=False,
gradient_as_bucket_view=True,
find_unused_parameters=False,
)
loss_amp_context = torch.amp.autocast("cuda", dtype=dtype, enabled=True)
model_amp_context = loss_amp_context
scaler = torch.amp.GradScaler("cuda", enabled=dtype == torch.float16)
if args.use_compile:
print("Trying compile.")
model.compile(mode="default")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.98))
iterator = range(args.num_iterations)
if rank == 0:
iterator = tqdm(iterator, total=args.num_iterations)
for _ in iterator:
spatial_size = (64, 64) # (72, 56) if random.random() > 0.5 else (64, 64)
x = torch.randn(args.batch_size, 4, *spatial_size, device=device)
cond = torch.randn(args.batch_size, cond_len, cond_dim, device=device)
cond_mask = torch.randn(args.batch_size, cond_len, device=device) > 0
with model_amp_context:
out = model(x, cond, cond_mask)
with loss_amp_context:
loss = F.mse_loss(x, out)
loss_test = loss.clone() # Ensure local loss is not changed by allreduce
torch.distributed.all_reduce(loss_test) # Check if any gpu has NaN loss
if rank == 0:
iterator.set_description(f"Loss: {loss_test.item()}")
if torch.isnan(loss_test):
raise ValueError("NaN loss.")
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
cleanup()
if __name__ == "__main__":
args = parse_args()
world_size = torch.cuda.device_count()
if world_size == 1:
main(0, world_size, args)
else:
torch.multiprocessing.spawn(
fn=main, args=(world_size, args), nprocs=world_size, join=True
)
```
Launch script:
`TORCHDYNAMO_VERBOSE=1 TORCH_LOGS=recompiles,graph_breaks CUDA_VISIBLE_DEVICES=0 python compile_debug.py --use_fsdp --use_compile`
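For reference, one of the graph breaks in the output below originates in einops, which calls `set.symmetric_difference` in its unbound class-method form (`set.symmetric_difference(a, b)`); the Dynamo hint in the log points at exactly that call. In plain Python the unbound form is equivalent to the bound call — a minimal sketch, with made-up set contents for illustration:

```python
left = {"b", "n", "(h d)"}
right = {"b", "h", "n", "d"}

# Unbound class-method form, as used in einops._prepare_transformation_recipe
# (the call the graph-break hint points at):
diff_unbound = set.symmetric_difference(left, right)

# Equivalent bound form:
diff_bound = left.symmetric_difference(right)

assert diff_unbound == diff_bound
print(sorted(diff_unbound))  # ['(h d)', 'd', 'h']
```

Whether Dynamo handles the bound form any better is not something I have verified; this is only to pin down which call site the log refers to.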
Code output:
> /home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py:430: UserWarning: FSDP is switching to use `NO_SHARD` instead of ShardingStrategy.HYBRID_SHARD since the world size is 1.
> warnings.warn(
> Trying compile.
> 0%| | 0/200 [00:00<?, ?it/s][rank0]:W0425 17:17:40.570000 814000 site-packages/torch/_logging/_internal.py:1130] [2/0] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] Graph break in user code at /home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py:2776
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] Graph Break Reason: Unsupported method call
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] Explanation: Dynamo does not know how to trace method `data_ptr` of class `<unknown type>`
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] Hint: Avoid calling `<unknown type>.data_ptr` in your code.
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] Hint: Please report an issue to PyTorch.
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks]
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] Developer debug context: call_method UntypedStorageVariable() data_ptr [] {}
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks]
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] User code traceback:
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/workspace/compile_debug.py", line 410, in <module>
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] main(0, world_size, args)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/workspace/compile_debug.py", line 387, in main
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] out = model(x, cond, cond_mask)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] return forward_call(*args, **kwargs)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 856, in forward
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] output = self._fsdp_wrapped_module(*args, **kwargs)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] return self._call_impl(*args, **kwargs)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] return forward_call(*args, **kwargs)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks]
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] ========== most recent `torch.compile` tracing attempt started here ==========
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks]
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/workspace/compile_debug.py", line 307, in forward
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] x = layer(x, cond, cond_mask)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 842, in forward
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] args, kwargs = _pre_forward(
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 382, in _pre_forward
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] unshard_fn(state, handle)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 417, in _pre_forward_unshard
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] _unshard(state, handle, state._unshard_stream, state._pre_unshard_stream)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/_runtime_utils.py", line 290, in _unshard
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] ran_pre_unshard = handle.pre_unshard()
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 1286, in pre_unshard
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] ret = self._writeback_orig_params()
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] return func(*args, **kwargs)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 2279, in _writeback_orig_params
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] or not _same_storage(param, flat_param_tensor)
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 2776, in _same_storage
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] return a.untyped_storage().data_ptr() == b.untyped_storage().data_ptr()
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks]
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks] NOTE: the most recent `torch.compile` tracing attempt might not be where you applied `torch.compile`! This is due to how graph breaks are implemented - the optimized code object returned by Dynamo will call another Dynamo-generated resume function and tracing is re-enabled by calling the resume function as a normal Python function, which Dynamo intercepts as a top-level frame.
> [rank0]:V0425 17:17:40.614000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [2/0] [__graph_breaks]
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] Graph break in user code at /home/user/anaconda3/envs/python310/lib/python3.10/site-packages/einops/einops.py:310
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] Graph Break Reason: Unsupported method call
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] Explanation: Dynamo does not know how to trace method `symmetric_difference` of class `type`
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] Hint: Avoid calling `type.symmetric_difference` in your code.
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] Hint: Please report an issue to PyTorch.
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks]
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] Developer debug context: call_method BuiltinVariable(set) symmetric_difference [SetVariable(), SetVariable()] {}
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks]
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] User code traceback:
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/workspace/compile_debug.py", line 410, in <module>
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] main(0, world_size, args)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/workspace/compile_debug.py", line 387, in main
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] out = model(x, cond, cond_mask)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return forward_call(*args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 856, in forward
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] output = self._fsdp_wrapped_module(*args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return self._call_impl(*args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return forward_call(*args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/workspace/compile_debug.py", line 307, in forward
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] x = layer(x, cond, cond_mask)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return self._call_impl(*args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return forward_call(*args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 856, in forward
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] output = self._fsdp_wrapped_module(*args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return self._call_impl(*args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return forward_call(*args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks]
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] ========== most recent `torch.compile` tracing attempt started here ==========
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks]
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/workspace/compile_debug.py", line 151, in forward
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] x = layer(x, *args, **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/workspace/compile_debug.py", line 75, in forward
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] q = rearrange(q, "b n (h d) -> b h n d", h=self.n_heads)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/einops/einops.py", line 591, in rearrange
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return reduce(tensor, pattern, reduction="rearrange", **axes_lengths)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/einops/einops.py", line 522, in reduce
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] recipe = _prepare_transformation_recipe(pattern, reduction, axes_names=tuple(axes_lengths), ndim=len(shape))
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/polyfills/__init__.py", line 140, in getattr_and_trace
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] return fn(*args[2:], **kwargs)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/einops/einops.py", line 310, in _prepare_transformation_recipe
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] difference = set.symmetric_difference(left.identifiers, rght.identifiers)
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks]
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks] NOTE: the most recent `torch.compile` tracing attempt might not be where you applied `torch.compile`! This is due to how graph breaks are implemented - the optimized code object returned by Dynamo will call another Dynamo-generated resume function and tracing is re-enabled by calling the resume function as a normal Python function, which Dynamo intercepts as a top-level frame.
> [rank0]:V0425 17:17:41.382000 814000 site-packages/torch/_dynamo/symbolic_convert.py:556] [3/0] [__graph_breaks]
> [rank0]:V0425 17:18:02.286000 814000 site-packages/torch/_dynamo/guards.py:2997] [5/1] [__recompiles] Recompiling function forward in /home/user/workspace/compile_debug.py:62
> [rank0]:V0425 17:18:02.286000 814000 site-packages/torch/_dynamo/guards.py:2997] [5/1] [__recompiles] triggered by the following guard failure(s):
> [rank0]:V0425 17:18:02.286000 814000 site-packages/torch/_dynamo/guards.py:2997] [5/1] [__recompiles] - 5/0: tensor 'x' stride mismatch at index 1. expected 4096, actual 1
> [rank0]:V0425 17:18:04.998000 814000 site-packages/torch/_dynamo/guards.py:2997] [4/1] [__recompiles] Recompiling function forward in /home/user/workspace/compile_debug.py:223
> [rank0]:V0425 17:18:04.998000 814000 site-packages/torch/_dynamo/guards.py:2997] [4/1] [__recompiles] triggered by the following guard failure(s):
> [rank0]:V0425 17:18:04.998000 814000 site-packages/torch/_dynamo/guards.py:2997] [4/1] [__recompiles] - 4/0: tensor 'x' size mismatch at index 1. expected 128, actual 256
> [rank0]:V0425 17:18:19.285000 814000 site-packages/torch/_dynamo/guards.py:2997] [4/2] [__recompiles] Recompiling function forward in /home/user/workspace/compile_debug.py:223
> [rank0]:V0425 17:18:19.285000 814000 site-packages/torch/_dynamo/guards.py:2997] [4/2] [__recompiles] triggered by the following guard failure(s):
> [rank0]:V0425 17:18:19.285000 814000 site-packages/torch/_dynamo/guards.py:2997] [4/2] [__recompiles] - 4/1: tensor 'x' size mismatch at index 2. expected 64, actual 32
> [rank0]:V0425 17:18:19.285000 814000 site-packages/torch/_dynamo/guards.py:2997] [4/2] [__recompiles] - 4/0: tensor 'x' size mismatch at index 1. expected 128, actual 256
> /home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/autograd/graph.py:824: UserWarning: Error detected in NativeGroupNormBackward0. Traceback of forward call that caused the error:
> File "/home/user/workspace/compile_debug.py", line 233, in forward
> h = self.out_layers[i](h)
> (Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:122.)
> return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
> 0%| | 0/200 [00:40<?, ?it/s]
> [rank0]: Traceback (most recent call last):
> [rank0]: File "/home/user/workspace/compile_debug.py", line 410, in <module>
> [rank0]: main(0, world_size, args)
> [rank0]: File "/home/user/workspace/compile_debug.py", line 387, in main
> [rank0]: out = model(x, cond, cond_mask)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
> [rank0]: return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 663, in _fn
> [rank0]: raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
> [rank0]: return fn(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]: return forward_call(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 856, in forward
> [rank0]: output = self._fsdp_wrapped_module(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
> [rank0]: return self._call_impl(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]: return forward_call(*args, **kwargs)
> [rank0]: File "/home/user/workspace/compile_debug.py", line 307, in forward
> [rank0]: x = layer(x, cond, cond_mask)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
> [rank0]: return self._call_impl(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]: return forward_call(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 856, in forward
> [rank0]: output = self._fsdp_wrapped_module(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
> [rank0]: return self._call_impl(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]: return forward_call(*args, **kwargs)
> [rank0]: File "/home/user/workspace/compile_debug.py", line 151, in forward
> [rank0]: x = layer(x, *args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
> [rank0]: return self._call_impl(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
> [rank0]: return forward_call(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1432, in __call__
> [rank0]: return self._torchdynamo_orig_callable(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1213, in __call__
> [rank0]: result = self._inner_convert(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 598, in __call__
> [rank0]: return _compile(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1059, in _compile
> [rank0]: guarded_code = compile_inner(code, one_graph, hooks, transform)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
> [rank0]: return function(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 761, in compile_inner
> [rank0]: return _compile_inner(code, one_graph, hooks, transform)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 797, in _compile_inner
> [rank0]: out_code = transform_code_object(code, transform)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
> [rank0]: transformations(instructions, code_options)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 257, in _fn
> [rank0]: return fn(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in transform
> [rank0]: tracer.run()
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
> [rank0]: super().run()
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
> [rank0]: while self.step():
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
> [rank0]: self.dispatch_table[inst.opcode](self, inst)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3701, in RETURN_VALUE
> [rank0]: self._return(inst)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3686, in _return
> [rank0]: self.output.compile_subgraph(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1144, in compile_subgraph
> [rank0]: self.compile_and_call_fx_graph(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1437, in compile_and_call_fx_graph
> [rank0]: compiled_fn = self.call_user_compiler(gm)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1487, in call_user_compiler
> [rank0]: return self._call_user_compiler(gm)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1544, in _call_user_compiler
> [rank0]: raise BackendCompilerFailed(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1519, in _call_user_compiler
> [rank0]: compiled_fn = compiler_fn(gm, self.example_inputs())
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
> [rank0]: compiled_gm = compiler_fn(gm, example_inputs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/__init__.py", line 2347, in __call__
> [rank0]: return compile_fx(model_, inputs_, config_patches=self.config)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2089, in compile_fx
> [rank0]: return aot_autograd(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 101, in __call__
> [rank0]: cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1160, in aot_module_simplified
> [rank0]: compiled_fn = AOTAutogradCache.load(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 775, in load
> [rank0]: compiled_fn = dispatch_and_compile()
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1145, in dispatch_and_compile
> [rank0]: compiled_fn, _ = create_aot_dispatcher_function(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
> [rank0]: return _create_aot_dispatcher_function(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
> [rank0]: compiled_fn, fw_metadata = compiler_fn(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 783, in aot_dispatch_autograd
> [rank0]: fx_g, joint_inputs, maybe_subclass_meta = aot_dispatch_autograd_graph(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 318, in aot_dispatch_autograd_graph
> [rank0]: fx_g = _create_graph(joint_fn_to_trace, updated_joint_inputs, aot_config=aot_config)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 55, in _create_graph
> [rank0]: fx_g = make_fx(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2240, in wrapped
> [rank0]: return make_fx_tracer.trace(f, *args)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2178, in trace
> [rank0]: return self._trace_inner(f, *args)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2149, in _trace_inner
> [rank0]: t = dispatch_trace(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_compile.py", line 51, in inner
> [rank0]: return disable_fn(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
> [rank0]: return fn(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1174, in dispatch_trace
> [rank0]: graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
> [rank0]: return fn(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 838, in trace
> [rank0]: (self.create_arg(fn(*args)),),
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 692, in flatten_fn
> [rank0]: tree_out = root_fn(*tree_args)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1229, in wrapped
> [rank0]: out = f(*tensors) # type:ignore[call-arg]
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 717, in inner_fn
> [rank0]: outs = fn(*args)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 668, in joint_helper
> [rank0]: return _functionalized_f_helper(primals, tangents)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 416, in _functionalized_f_helper
> [rank0]: f_outs = fn(*f_args)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 283, in inner_fn_with_anomaly
> [rank0]: return inner_fn(*args)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 268, in inner_fn
> [rank0]: backward_out = torch.autograd.grad(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/autograd/__init__.py", line 451, in grad
> [rank0]: return handle_torch_function(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/overrides.py", line 1721, in handle_torch_function
> [rank0]: result = mode.__torch_function__(public_api, types, args, kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1277, in __torch_function__
> [rank0]: return func(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/autograd/__init__.py", line 502, in grad
> [rank0]: result = _engine_run_backward(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
> [rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_subclasses/functional_tensor.py", line 525, in __torch_dispatch__
> [rank0]: outs_unwrapped = func._op_dk(
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/utils/_stats.py", line 27, in wrapper
> [rank0]: return fn(*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1379, in __torch_dispatch__
> [rank0]: return proxy_call(self, func, self.pre_dispatch, args, kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 791, in proxy_call
> [rank0]: r = maybe_handle_decomp(proxy_mode, func, args, kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2308, in maybe_handle_decomp
> [rank0]: out = CURRENT_DECOMPOSITION_TABLE[op](*args, **kwargs)
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_decomp/decompositions.py", line 84, in inner
> [rank0]: r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
> [rank0]: File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_decomp/decompositions.py", line 1531, in native_group_norm_backward
> [rank0]: cpg, _rem = divmod(C, group)
> [rank0]: torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
> [rank0]: TypeError: unsupported operand type(s) for divmod(): 'SymInt' and 'int'
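The root cause is that the builtin `divmod()` requires the left operand to implement `__divmod__`, which `torch.SymInt` does not, even though it supports `//` and `%` individually. A minimal pure-Python sketch reproduces the failure mode and the usual workaround of computing quotient and remainder separately (`FakeSymInt` below is a hypothetical stand-in, not the real `SymInt`):

```python
# Hypothetical stand-in mimicking the relevant behavior of torch.SymInt:
# it implements // and % but not __divmod__, so the builtin divmod() fails
# with the same TypeError as in the traceback above.
class FakeSymInt:
    def __init__(self, v):
        self.v = v

    def __floordiv__(self, other):
        return FakeSymInt(self.v // other)

    def __mod__(self, other):
        return FakeSymInt(self.v % other)


C = FakeSymInt(32)  # e.g. a dynamic channel count
group = 8

try:
    cpg, rem = divmod(C, group)  # mirrors the failing line in native_group_norm_backward
except TypeError as e:
    print(e)  # unsupported operand type(s) for divmod(): 'FakeSymInt' and 'int'

# Workaround: compute the quotient and remainder separately, which SymInt supports.
cpg, rem = C // group, C % group
print(cpg.v, rem.v)  # → 4 0
```

The decomposition could avoid the error the same way, by replacing `divmod(C, group)` with separate `//` and `%` operations.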
### Versions
Collecting environment information...
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.210-39.1.pagevecsize-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 560.35.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 230
On-line CPU(s) list: 0-229
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7702 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 230
Stepping: 0
BogoMIPS: 4000.52
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
Virtualization: AMD-V
L1d cache: 14.4 MiB (230 instances)
L1i cache: 14.4 MiB (230 instances)
L2 cache: 115 MiB (230 instances)
L3 cache: 3.6 GiB (230 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-56
NUMA node1 CPU(s): 57-113
NUMA node2 CPU(s): 114-170
NUMA node3 CPU(s): 171-229
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==5.0.4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] open_clip_torch==2.29.0
[pip3] pytorch-warmup==0.1.1
[pip3] torch==2.7.0+cu128
[pip3] torch-model-archiver==0.12.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torch-workflow-archiver==0.2.15
[pip3] torchaudio==2.7.0+cu128
[pip3] torchmetrics==1.6.0
[pip3] torchsde==0.2.6
[pip3] torchserve==0.12.0
[pip3] torchvision==0.22.0+cu128
[pip3] triton==3.3.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] open-clip-torch 2.29.0 pypi_0 pypi
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] torch 2.7.0+cu128 pypi_0 pypi
[conda] torch-model-archiver 0.12.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torch-workflow-archiver 0.2.15 pypi_0 pypi
[conda] torchaudio 2.7.0+cu128 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchserve 0.12.0 pypi_0 pypi
[conda] torchvision 0.22.0+cu128 pypi_0 pypi
[conda] triton 3.3.0 pypi_0 pypi
cc @mruberry @jbschlosser @walterddr @mikaylagawarecki @chauhang @penguinwu
| true
|
3,020,195,447
|
[WIP] New Win Arm64 Runners - User pre installed Visual Studio
|
iremyux
|
open
|
[
"open source",
"topic: not user facing",
"ciflow/binaries_wheel",
"ciflow/binaries_libtorch"
] | 1
|
COLLABORATOR
| null | true
|
3,020,123,984
|
write a custom ViewAndMutationmeta.__repr__
|
bdhirsh
|
open
|
[
"triaged",
"oncall: pt2"
] | 0
|
CONTRIBUTOR
|
`ViewAndMutationMeta` can hold a tensor subclass today, when torch.compile is used with tensor subclass graph inputs/outputs.
We also heavily log our `ViewAndMutationMeta` object during compilation when we are running compile with tlparse.
This can be a problem if the subclass we are tracing either:
(1) does not have a `__repr__` defined, or
(2) has a `__repr__` that is not resilient when the subclass's inner tensors are fake tensors (e.g. if the repr contains data-dependent code).
The real fix here is generally to require "pt2-friendly" tensor subclasses to have a repr that works in the case where they hold inner fake tensors.
Even so, we should protect against this case: one way would be to define a custom `__repr__` on `ViewAndMutationMeta` that wraps any subclass printing in exception handling.
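A minimal sketch of such a defensive repr (the class and field names below are illustrative, not the actual `ViewAndMutationMeta` definition):

```python
import dataclasses


def _safe_repr(obj):
    # Wrap repr() so one subclass with a broken or fake-tensor-unfriendly
    # __repr__ cannot crash logging of the whole metadata object.
    try:
        return repr(obj)
    except Exception as e:
        return f"<repr failed for {type(obj).__name__}: {type(e).__name__}>"


@dataclasses.dataclass
class ViewAndMutationMetaSketch:  # hypothetical stand-in for ViewAndMutationMeta
    traced_tangents: list
    output_info: list

    def __repr__(self):
        fields = ", ".join(
            f"{f.name}={_safe_repr(getattr(self, f.name))}"
            for f in dataclasses.fields(self)
        )
        return f"{type(self).__name__}({fields})"


class BadSubclass:
    # Simulates a tensor subclass whose repr runs data-dependent code
    # that blows up on fake tensors.
    def __repr__(self):
        raise RuntimeError("data-dependent repr on a fake tensor")


meta = ViewAndMutationMetaSketch(traced_tangents=[BadSubclass()], output_info=[])
print(repr(meta))  # logging no longer raises; the bad field is reported inline
```

The same `_safe_repr` wrapper could be applied per-field on the real dataclass, so tlparse logs degrade gracefully instead of aborting compilation.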
cc @chauhang @penguinwu
| true
|
3,019,921,415
|
GH200/GB200 NCCL Build Pytorch
|
johnnynunez
|
open
|
[
"module: build",
"triaged",
"module: nccl",
"module: third_party",
"has workaround"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```bash
[879/3160] Building CXX object c10/test/CMakeFiles/c10_generic_math_test.dir/util/generic_math_test.cpp.o
[880/3160] Building CXX object c10/test/CMakeFiles/c10_tempfile_test.dir/util/tempfile_test.cpp.o
[881/3160] Building C object sleef/src/libm/CMakeFiles/sleefpurec_scalar.dir/sleefsimdsp.c.o
[882/3160] Building CXX object c10/test/CMakeFiles/c10_bit_cast_test.dir/util/bit_cast_test.cpp.o
[883/3160] Building C object sleef/src/libm/CMakeFiles/sleefdetpurec_scalar.dir/sleefsimddp.c.o
[884/3160] Regenerating version file...
[885/3160] Building C object sleef/src/libm/CMakeFiles/sleefdetsvenofma.dir/sleefsimddp.c.o
[886/3160] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_sve.dir/common_sve.cc.o
[887/3160] Building C object sleef/src/libm/CMakeFiles/sleefpurecfma_scalar.dir/sleefsimddp.c.o
[888/3160] Building CXX object c10/test/CMakeFiles/c10_lazy_test.dir/util/lazy_test.cpp.o
[889/3160] Building C object sleef/src/libm/CMakeFiles/sleefdetadvsimd.dir/sleefsimddp.c.o
[890/3160] Building C object sleef/src/libm/CMakeFiles/sleefsvenofma.dir/sleefsimdsp.c.o
[891/3160] Building CXX object c10/test/CMakeFiles/c10_ssize_test.dir/util/ssize_test.cpp.o
[892/3160] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor/defs.cc.o
[893/3160] Building CXX object c10/test/CMakeFiles/c10_NetworkFlow_test.dir/util/NetworkFlow_test.cpp.o
[894/3160] Building CXX object c10/CMakeFiles/c10.dir/util/signal_handler.cpp.o
[895/3160] Building CXX object c10/test/CMakeFiles/c10_Scalar_test.dir/core/Scalar_test.cpp.o
[896/3160] Building CXX object c10/test/CMakeFiles/c10_complex_test.dir/util/complex_test.cpp.o
[897/3160] Building CXX object c10/cuda/test/CMakeFiles/c10_cuda_CUDATest.dir/impl/CUDATest.cpp.o
[898/3160] Building C object sleef/src/libm/CMakeFiles/sleefpurec_scalar.dir/sleefsimddp.c.o
[899/3160] Building C object sleef/src/libm/CMakeFiles/sleefadvsimd.dir/sleefsimdsp.c.o
[900/3160] Building CXX object c10/test/CMakeFiles/c10_ThreadLocal_test.dir/util/ThreadLocal_test.cpp.o
[901/3160] Building CXX object c10/CMakeFiles/c10.dir/util/tempfile.cpp.o
[902/3160] Building C object sleef/src/libm/CMakeFiles/sleefpurecfma_scalar.dir/sleefsimdsp.c.o
[903/3160] Building C object sleef/src/libm/CMakeFiles/sleefsve.dir/sleefsimdsp.c.o
[904/3160] Building C object sleef/src/libm/CMakeFiles/dispscalar_obj.dir/dispscalar.c.o
[905/3160] Generating sources
[906/3160] Linking CXX shared library lib/libc10.so
[907/3160] Building CXX object caffe2/CMakeFiles/vec_test_all_types_SVE256.dir/__/aten/src/ATen/native/quantized/AffineQuantizerBase.cpp.o
[908/3160] Building C object sleef/src/libm/CMakeFiles/sleefsve.dir/sleefsimddp.c.o
[909/3160] Linking CXX executable bin/c10_Device_test
[910/3160] Linking CXX executable bin/c10_DeviceGuard_test
[911/3160] Linking CXX executable bin/c10_CompileTimeFunctionPointer_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[912/3160] Linking CXX executable bin/c10_SymInt_test
[913/3160] Linking CXX executable bin/c10_StreamGuard_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[914/3160] Linking CXX executable bin/c10_Scalar_test
[915/3160] Linking CXX executable bin/c10_ArrayRef_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[916/3160] Linking CXX executable bin/c10_InlineDeviceGuard_test
[917/3160] Linking CXX executable bin/c10_InlineStreamGuard_test
[918/3160] Linking CXX executable bin/c10_cow_test
[919/3160] Linking CXX executable bin/c10_Bitset_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[920/3160] Linking CXX executable bin/c10_ConstexprCrc_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[921/3160] Linking CXX executable bin/c10_DeadlockDetection_test
[922/3160] Linking CXX executable bin/c10_DispatchKeySet_test
[923/3160] Linking CXX executable bin/c10_SizesAndStrides_test
[924/3160] Linking CXX executable bin/c10_Half_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[925/3160] Linking CXX executable bin/c10_Synchronized_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[926/3160] Linking CXX executable bin/c10_TypeIndex_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[927/3160] Linking CXX executable bin/c10_Metaprogramming_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[928/3160] Linking CXX executable bin/c10_ThreadLocal_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[929/3160] Linking CXX executable bin/c10_TypeList_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[930/3160] Linking CXX executable bin/c10_complex_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[931/3160] Linking CXX executable bin/c10_NetworkFlow_test
[932/3160] Linking CXX executable bin/c10_irange_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[933/3160] Linking CXX executable bin/c10_bit_cast_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[934/3160] Linking CXX executable bin/c10_LeftRight_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[935/3160] Linking CXX executable bin/c10_accumulate_test
[936/3160] Linking CXX executable bin/c10_TypeTraits_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[937/3160] Linking CXX executable bin/c10_flags_test
[938/3160] Linking CXX executable bin/c10_error_test
[939/3160] Linking CXX executable bin/c10_exception_test
[940/3160] Linking CXX executable bin/c10_lazy_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[941/3160] Linking CXX executable bin/c10_generic_math_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[942/3160] Linking CXX executable bin/c10_ssize_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[943/3160] Linking CXX executable bin/c10_registry_test
[944/3160] Linking CXX executable bin/c10_string_view_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[945/3160] Linking CXX executable bin/c10_tempfile_test
[946/3160] Linking CXX executable bin/c10_string_util_test
[947/3160] Building C object sleef/src/libm/CMakeFiles/sleefadvsimd.dir/sleefsimddp.c.o
[948/3160] Linking CXX executable bin/c10_intrusive_ptr_benchmark
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[949/3160] Building CXX object c10/test/CMakeFiles/c10_bfloat16_test.dir/util/bfloat16_test.cpp.o
[950/3160] Building C object sleef/src/libm/CMakeFiles/sleefsvenofma.dir/sleefsimddp.c.o
[951/3160] Linking CXX executable bin/c10_bfloat16_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[952/3160] Linking C static library sleef/lib/libsleef.a
[953/3160] Building CXX object c10/test/CMakeFiles/c10_logging_test.dir/util/logging_test.cpp.o
[954/3160] Linking CXX executable bin/c10_logging_test
[955/3160] Building CXX object caffe2/CMakeFiles/vec_test_all_types_DEFAULT.dir/__/aten/src/ATen/native/quantized/AffineQuantizerBase.cpp.o
[956/3160] Building CXX object c10/test/CMakeFiles/c10_complex_math_test.dir/util/complex_math_test.cpp.o
[957/3160] Building CXX object c10/test/CMakeFiles/c10_typeid_test.dir/util/typeid_test.cpp.o
[958/3160] Linking CXX executable bin/c10_complex_math_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[959/3160] Linking CXX executable bin/c10_typeid_test
[960/3160] Building CXX object c10/cuda/CMakeFiles/c10_cuda.dir/CUDAMallocAsyncAllocator.cpp.o
[961/3160] Building CXX object third_party/kineto/libkineto/CMakeFiles/kineto_base.dir/src/CuptiActivityProfiler.cpp.o
[962/3160] Linking CXX static library lib/libkineto.a
[963/3160] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor/old.cc.o
[964/3160] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/schema.cc.o
[965/3160] Building CXX object c10/test/CMakeFiles/c10_ordered_preserving_dict_test.dir/util/ordered_preserving_dict_test.cpp.o
[966/3160] Linking CXX executable bin/c10_ordered_preserving_dict_test
[967/3160] Generating /opt/pytorch/torch/_C/__init__.pyi, /opt/pytorch/torch/_C/_VariableFunctions.pyi, /opt/pytorch/torch/nn/functional.pyi
[968/3160] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_sve.dir/embedding_lookup_idx_sve.cc.o
[969/3160] Linking CXX static library lib/libCaffe2_perfkernels_sve.a
[970/3160] Generating /opt/pytorch/torch/csrc/autograd/generated/Functions.cpp, /opt/pytorch/torch/csrc/autograd/generated/ViewFuncs.cpp, /opt/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp, /opt/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp, /opt/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp, /opt/pytorch/torch/csrc/autograd/generated/VariableType_3.cpp, /opt/pytorch/torch/csrc/autograd/generated/VariableType_4.cpp, /opt/pytorch/torch/csrc/autograd/generated/TraceType_0.cpp, /opt/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp, /opt/pytorch/torch/csrc/autograd/generated/TraceType_2.cpp, /opt/pytorch/torch/csrc/autograd/generated/TraceType_3.cpp, /opt/pytorch/torch/csrc/autograd/generated/TraceType_4.cpp, /opt/pytorch/torch/csrc/autograd/generated/ADInplaceOrViewType_0.cpp, /opt/pytorch/torch/csrc/autograd/generated/ADInplaceOrViewType_1.cpp, /opt/pytorch/torch/csrc/inductor/aoti_torch/generated/c_shim_cpu.cpp, /opt/pytorch/torch/csrc/lazy/generated/LazyNativeFunctions.cpp, /opt/pytorch/torch/csrc/lazy/generated/RegisterAutogradLazy.cpp, /opt/pytorch/torch/csrc/lazy/generated/RegisterLazy.cpp, /opt/pytorch/torch/csrc/autograd/generated/Functions.h, /opt/pytorch/torch/csrc/autograd/generated/variable_factories.h, /opt/pytorch/torch/csrc/autograd/generated/ViewFuncs.h, /opt/pytorch/torch/csrc/autograd/generated/VariableType.h, /opt/pytorch/torch/csrc/lazy/generated/LazyIr.h, /opt/pytorch/torch/csrc/lazy/generated/LazyNonNativeIr.h, /opt/pytorch/torch/csrc/lazy/generated/LazyNativeFunctions.h, /opt/pytorch/torch/csrc/autograd/generated/python_functions_0.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_functions_1.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_functions_2.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_functions_3.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_functions_4.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp, 
/opt/pytorch/torch/csrc/autograd/generated/python_torch_functions_0.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_torch_functions_1.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_nn_functions.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_fft_functions.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_linalg_functions.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_nested_functions.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_sparse_functions.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_special_functions.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_return_types.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_enum_tag.cpp, /opt/pytorch/torch/csrc/autograd/generated/python_functions.h, /opt/pytorch/torch/csrc/autograd/generated/python_return_types.h, /opt/pytorch/torch/testing/_internal/generated/annotated_fn_args.py, /opt/pytorch/torch/csrc/inductor/aoti_torch/generated/c_shim_cuda.cpp
[971/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ParallelNative.cpp.o
[972/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/SequenceNumber.cpp.o
[973/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/cpu/Utils.cpp.o
[974/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/FuncTorchTLS.cpp.o
[975/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/cpu/FlushDenormal.cpp.o
[976/3160] Building CXX object c10/test/CMakeFiles/c10_optional_test.dir/util/optional_test.cpp.o
[977/3160] Linking CXX executable bin/c10_optional_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[978/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/LegacyVmapMode.cpp.o
[979/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/AccumulateType.cpp.o
[980/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Dispatch.cpp.o
[981/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ParallelCommon.cpp.o
[982/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/PythonTorchFunctionTLS.cpp.o
[983/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/SavedTensorHooks.cpp.o
[984/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/CPUGuardImpl.cpp.o
[985/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/EmptyTensor.cpp.o
[986/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/IPUHooksInterface.cpp.o
[987/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/DeviceAccelerator.cpp.o
[988/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/CUDAHooksInterface.cpp.o
[989/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Context.cpp.o
[990/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/TensorMeta.cpp.o
[991/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Version.cpp.o
[992/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/MetaGuardImpl.cpp.o
[993/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/DynamicLibrary.cpp.o
[994/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/CPUGeneratorImpl.cpp.o
[995/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/XPUHooksInterface.cpp.o
[996/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/PrivateUse1HooksInterface.cpp.o
[997/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/MAIAHooksInterface.cpp.o
[998/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/TensorGeometry.cpp.o
[999/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/MemoryOverlap.cpp.o
[1000/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/MTIAHooksInterface.cpp.o
[1001/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/HIPHooksInterface.cpp.o
[1002/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/HPUHooksInterface.cpp.o
[1003/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ThreadLocalPythonObjects.cpp.o
[1004/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/LegacyBatchedTensorImpl.cpp.o
[1005/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/detail/MPSHooksInterface.cpp.o
[1006/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/FunctionalStorageImpl.cpp.o
[1007/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/TensorNames.cpp.o
[1008/3160] Building CUDA object c10/cuda/test/CMakeFiles/c10_cuda_CUDAAssertionsTest_multiple_writes_from_multiple_blocks.dir/impl/CUDAAssertionsTest_multiple_writes_from_multiple_blocks.cu.o
[1009/3160] Building CUDA object c10/cuda/test/CMakeFiles/c10_cuda_CUDAAssertionsTest_from_2_processes.dir/impl/CUDAAssertionsTest_from_2_processes.cu.o
[1010/3160] Building CUDA object c10/cuda/test/CMakeFiles/c10_cuda_CUDAAssertionsTest_multiple_writes_from_same_block.dir/impl/CUDAAssertionsTest_multiple_writes_from_same_block.cu.o
[1011/3160] Building CXX object c10/cuda/CMakeFiles/c10_cuda.dir/CUDACachingAllocator.cpp.o
[1012/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ExpandUtils.cpp.o
[1013/3160] Linking CXX shared library lib/libc10_cuda.so
Warning: Unused direct dependencies:
/lib/aarch64-linux-gnu/libm.so.6
[1014/3160] Building CUDA object c10/cuda/test/CMakeFiles/c10_cuda_CUDAAssertionsTest_multiple_writes_from_blocks_and_threads.dir/impl/CUDAAssertionsTest_multiple_writes_from_blocks_and_threads.cu.o
[1015/3160] Linking CXX executable bin/c10_cuda_CUDAAssertionsTest_from_2_processes
[1016/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/MapAllocator.cpp.o
[1017/3160] Linking CXX executable bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_blocks_and_threads
[1018/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ConjugateFallback.cpp.o
[1019/3160] Linking CXX executable bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_multiple_blocks
[1020/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/FunctionalizeFallbackKernel.cpp.o
[1021/3160] Linking CXX executable bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_same_block
[1022/3160] Linking CXX executable bin/c10_cuda_CUDATest
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
/usr/local/cuda/lib64/libcudart.so.12
[1023/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/ATenGeneral.cpp.o
[1024/3160] Building CUDA object c10/cuda/test/CMakeFiles/c10_cuda_CUDAAssertionsTest_catches_thread_and_block_and_device.dir/impl/CUDAAssertionsTest_catches_thread_and_block_and_device.cu.o
[1025/3160] Linking CXX executable bin/c10_cuda_CUDAAssertionsTest_catches_thread_and_block_and_device
[1026/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/TensorIndexing.cpp.o
[1027/3160] Building CUDA object c10/cuda/test/CMakeFiles/c10_cuda_CUDAAssertionsTest_catches_stream.dir/impl/CUDAAssertionsTest_catches_stream.cu.o
[1028/3160] Linking CXX executable bin/c10_cuda_CUDAAssertionsTest_catches_stream
[1029/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/ADInterpreters.cpp.o
[1030/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ScalarOps.cpp.o
[1031/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Utils.cpp.o
[1032/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/NestedTensorImpl.cpp.o
[1033/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/NamedTensorUtils.cpp.o
[1034/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/Dimname.cpp.o
[1035/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/CachedTensorUtils.cpp.o
[1036/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/TensorUtils.cpp.o
[1037/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/StorageUtils.cpp.o
[1038/3160] Building CUDA object c10/cuda/test/CMakeFiles/c10_cuda_CUDAAssertionsTest_1_var_test.dir/impl/CUDAAssertionsTest_1_var_test.cu.o
[1039/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/SparseCsrTensorImpl.cpp.o
[1040/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/DeprecatedTypePropertiesRegistry.cpp.o
[1041/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ParallelOpenMP.cpp.o
[1042/3160] Linking CXX executable bin/c10_cuda_CUDAAssertionsTest_1_var_test
[1043/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/DeprecatedTypeProperties.cpp.o
[1044/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/Range.cpp.o
[1045/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ParallelThreadPoolNative.cpp.o
[1046/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/LegacyVmapTransforms.cpp.o
[1047/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ThreadLocalState.cpp.o
[1048/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/Interpreter.cpp.o
[1049/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/Generator.cpp.o
[1050/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/Vitals.cpp.o
[1051/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/VmapInterpreter.cpp.o
[1052/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/NestedIntSymNodeImpl.cpp.o
[1053/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/FunctionalizeInterpreter.cpp.o
[1054/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/GeneratorForPrivateuseone.cpp.o
[1055/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchedTensorImpl.cpp.o
[1056/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/TensorIterator.cpp.o
[1057/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/FunctionalTensorWrapper.cpp.o
[1058/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/NamedTensor.cpp.o
[1059/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/blob.cpp.o
[1060/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/dispatch/ObservedOperators.cpp.o
[1061/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/DLConvertor.cpp.o
[1062/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/PlumbingHelper.cpp.o
[1063/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/BackendSelectFallbackKernel.cpp.o
[1064/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/FunctionalInverses.cpp.o
[1065/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/List.cpp.o
[1066/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/TensorWrapper.cpp.o
[1067/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/Dict.cpp.o
[1068/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/LegacyBatchedFallback.cpp.o
[1069/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/record_function.cpp.o
[1070/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/operator_name.cpp.o
[1071/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/VariableHooksInterface.cpp.o
[1072/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/register_symbols.cpp.o
[1073/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/Formatting.cpp.o
[1074/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/SparseTensorImpl.cpp.o
[1075/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/adaption.cpp.o
[1076/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/frontend/strtod.cpp.o
[1077/3160] Building CXX object c10/test/CMakeFiles/c10_intrusive_ptr_test.dir/util/intrusive_ptr_test.cpp.o
[1078/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/ZeroTensorFallback.cpp.o
[1079/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/Tensor.cpp.o
[1080/3160] Linking CXX executable bin/c10_intrusive_ptr_test
Warning: Unused direct dependencies:
/opt/pytorch/build/lib/libc10.so
[1081/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/PythonOpRegistrationTrampoline.cpp.o
[1082/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/dispatch/DispatchKeyExtractor.cpp.o
[1083/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/PythonFallbackKernel.cpp.o
[1084/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/TorchDispatchUtils.cpp.o
[1085/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchedFallback.cpp.o
[1086/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesDynamic.cpp.o
[1087/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/VmapModeRegistrations.cpp.o
[1088/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesHelper.cpp.o
[1089/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/frontend/error_report.cpp.o
[1090/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/VmapModeRegistrations.cpp.o
[1091/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/PyTorchOperatorHacks.cpp.o
[1092/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/class_type.cpp.o
[1093/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/LegacyVmapTransforms.cpp.o
[1094/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesActivation.cpp.o
[1095/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/LegacyBatchingRegistrations.cpp.o
[1096/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/boxing/KernelFunction.cpp.o
[1097/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/NamedRegistrations.cpp.o
[1098/3160] Building CXX object caffe2/CMakeFiles/vec_test_all_types_DEFAULT.dir/__/aten/src/ATen/test/vec_test_all_types.cpp.o
FAILED: caffe2/CMakeFiles/vec_test_all_types_DEFAULT.dir/__/aten/src/ATen/test/vec_test_all_types.cpp.o
/usr/bin/c++ -DAT_BUILD_ARM_VEC256_WITH_SLEEF -DCAFFE2_PERF_WITH_SVE=1 -DCPU_CAPABILITY=DEFAULT -DCPU_CAPABILITY_DEFAULT -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -I/opt/pytorch/build/aten/src -I/opt/pytorch/aten/src -I/opt/pytorch/build -I/opt/pytorch -I/opt/pytorch/cmake/../third_party/benchmark/include -I/opt/pytorch/third_party/onnx -I/opt/pytorch/build/third_party/onnx -I/opt/pytorch/nlohmann -I/opt/pytorch/build/include -I/opt/pytorch/build/caffe2/../aten/src -I/opt/pytorch/c10/.. -isystem /opt/pytorch/build/third_party/gloo -isystem /opt/pytorch/cmake/../third_party/gloo -isystem /opt/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /opt/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /opt/pytorch/cmake/../third_party/googletest/googletest/include -isystem /opt/pytorch/third_party/protobuf/src -isystem /opt/pytorch/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /opt/pytorch/INTERFACE -isystem /opt/pytorch/third_party/kleidiai -isystem /opt/pytorch/third_party/kleidiai/kai -isystem /opt/pytorch/third_party/kleidiai/kai/ukernels -isystem /opt/pytorch/third_party/kleidiai/kai/ukernels/matmul -isystem /opt/pytorch/third_party/kleidiai/kai/ukernels/matmul/matmul_clamp_f32_qai8dxp_qsi4cxp -isystem /opt/pytorch/third_party/kleidiai/kai/ukernels/matmul/matmul_clamp_f32_qsi8d32p_qsi4c32p -isystem /opt/pytorch/third_party/kleidiai/kai/ukernels/matmul/matmul_clamp_f32_qai8dxp_qsi4c32p -isystem /opt/pytorch/third_party/kleidiai/kai/ukernels/matmul/pack -isystem /opt/pytorch/third_party/nlohmann/include -isystem /opt/pytorch/third_party/googletest/googletest/include -isystem /opt/pytorch/third_party/googletest/googletest -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_PYTORCH_QNNPACK -DAT_BUILD_ARM_VEC256_WITH_SLEEF -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-dangling-reference -Wno-error=dangling-reference -Wno-error=redundant-move -Wno-stringop-overflow -DHAVE_SVE_CPU_DEFINITION -DHAVE_SVE256_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIE -march=native -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -D__NEON__ -O3 -Wno-ignored-qualifiers -MD -MT caffe2/CMakeFiles/vec_test_all_types_DEFAULT.dir/__/aten/src/ATen/test/vec_test_all_types.cpp.o -MF caffe2/CMakeFiles/vec_test_all_types_DEFAULT.dir/__/aten/src/ATen/test/vec_test_all_types.cpp.o.d -o caffe2/CMakeFiles/vec_test_all_types_DEFAULT.dir/__/aten/src/ATen/test/vec_test_all_types.cpp.o -c /opt/pytorch/aten/src/ATen/test/vec_test_all_types.cpp
during GIMPLE pass: sink
In file included from /opt/pytorch/third_party/googletest/googletest/include/gtest/gtest-printers.h:122,
from /opt/pytorch/third_party/googletest/googletest/include/gtest/gtest-matchers.h:49,
from /opt/pytorch/third_party/googletest/googletest/include/gtest/internal/gtest-death-test-internal.h:47,
from /opt/pytorch/third_party/googletest/googletest/include/gtest/gtest-death-test.h:43,
from /opt/pytorch/third_party/googletest/googletest/include/gtest/gtest.h:64,
from /opt/pytorch/aten/src/ATen/test/vec_test_all_types.h:6,
from /opt/pytorch/aten/src/ATen/test/vec_test_all_types.cpp:1:
/opt/pytorch/aten/src/ATen/test/vec_test_all_types.cpp: In member function ‘virtual void {anonymous}::VecConvertBFloat16_ExhaustiveToFloat_Test::TestBody()’:
/opt/pytorch/aten/src/ATen/test/vec_test_all_types.cpp:1770:10: internal compiler error: Segmentation fault
1770 | TEST(VecConvertBFloat16, ExhaustiveToFloat) {
| ^~~~~~~~~~~~~~~~~~
0xbc077f internal_error(char const*, ...)
???:0
0x104f390 bb_loop_depth(basic_block_def const*)
???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <file:///usr/share/doc/gcc-13/README.Bugs> for instructions.
[1099/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/MetaFallbackKernel.cpp.o
[1100/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/frontend/lexer.cpp.o
[1101/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/dynamic_type.cpp.o
[1102/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/function_schema.cpp.o
[1103/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/VariableFallbackKernel.cpp.o
[1104/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/AdaptiveMaxPooling2d.cpp.o
[1105/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/interned_strings.cpp.o
[1106/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesPooling.cpp.o
[1107/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/type_factory.cpp.o
[1108/3160] Building CXX object c10/test/CMakeFiles/c10_small_vector_test.dir/util/small_vector_test.cpp.o
[1109/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/BlasKernel.cpp.o
[1110/3160] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/version_converter/convert.cc.o
[1111/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/AdaptiveAveragePooling.cpp.o
[1112/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/AmpKernels.cpp.o
[1113/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/DynamicLayer.cpp.o
[1114/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/AutogradComposite.cpp.o
[1115/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesNorm.cpp.o
[1116/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/op_registration/op_registration.cpp.o
[1117/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/tensor_type.cpp.o
[1118/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/dispatch/OperatorEntry.cpp.o
[1119/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesConvolution.cpp.o
[1120/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesIndexing.cpp.o
[1121/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/ComparisonUtils.cpp.o
[1122/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/AdaptiveMaxPooling3d.cpp.o
[1123/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/library.cpp.o
[1124/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesBinaryOps.cpp.o
[1125/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/AdaptiveAveragePooling3d.cpp.o
[1126/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/union_type.cpp.o
[1127/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/AffineGridGenerator.cpp.o
[1128/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesFactory.cpp.o
[1129/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/dispatch/Dispatcher.cpp.o
[1130/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/Constraints.cpp.o
[1131/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/Activation.cpp.o
[1132/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/op_registration/infer_schema.cpp.o
[1133/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/frontend/function_schema_parser.cpp.o
[1134/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/custom_class.cpp.o
[1135/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesModules.cpp.o
[1136/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/AveragePool2d.cpp.o
[1137/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/ChanelShuffle.cpp.o
[1138/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/ivalue.cpp.o
[1139/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesRandomness.cpp.o
[1140/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesReduceOps.cpp.o
[1141/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/AveragePool3d.cpp.o
[1142/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesUnaryOps.cpp.o
[1143/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/type.cpp.o
[1144/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/Col2Im.cpp.o
[1145/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp.o
[1146/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/frontend/schema_type_parser.cpp.o
[1147/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/BinaryOps.cpp.o
[1148/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/CPUBlas.cpp.o
[1149/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/frontend/source_range.cpp.o
[1150/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesLoss.cpp.o
[1151/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesLinearAlgebra.cpp.o
[1152/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/Blas.cpp.o
[1153/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/Bucketization.cpp.o
[1154/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesScatterOps.cpp.o
[1155/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/CPUFallback.cpp.o
[1156/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesViews.cpp.o
[1157/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/BatchLinearAlgebraKernel.cpp.o
[1158/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/functorch/BatchRulesDecompositions.cpp.o
[1159/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/Convolution.cpp.o
[1160/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/BatchLinearAlgebra.cpp.o
[1161/3160] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/autocast_mode.cpp.o
[1162/3160] Building CXX object caffe2/CMakeFiles/vec_test_all_types_SVE256.dir/__/aten/src/ATen/test/vec_test_all_types.cpp.o
[1163/3160] Performing build step for 'nccl_external'
make -C src build BUILDDIR=/opt/pytorch/build/nccl
make[1]: Entering directory '/opt/pytorch/third_party/nccl/src'
NVCC_GENCODE is -gencode=arch=compute_90,code=sm_90 -gencode=arch=compute_100,code=sm_100 -gencode=arch=compute_101,code=sm_101 -gencode=arch=compute_120,code=sm_120
Generating nccl.h.in > /opt/pytorch/build/nccl/include/nccl.h
Generating nccl.pc.in > /opt/pytorch/build/nccl/lib/pkgconfig/nccl.pc
Compiling bootstrap.cc > /opt/pytorch/build/nccl/obj/bootstrap.o
Compiling channel.cc > /opt/pytorch/build/nccl/obj/channel.o
Compiling collectives.cc > /opt/pytorch/build/nccl/obj/collectives.o
Compiling debug.cc > /opt/pytorch/build/nccl/obj/debug.o
Compiling enqueue.cc > /opt/pytorch/build/nccl/obj/enqueue.o
Compiling group.cc > /opt/pytorch/build/nccl/obj/group.o
Compiling init.cc > /opt/pytorch/build/nccl/obj/init.o
Compiling init_nvtx.cc > /opt/pytorch/build/nccl/obj/init_nvtx.o
Compiling proxy.cc > /opt/pytorch/build/nccl/obj/proxy.o
Compiling transport.cc > /opt/pytorch/build/nccl/obj/transport.o
Compiling mnnvl.cc > /opt/pytorch/build/nccl/obj/mnnvl.o
Compiling graph/connect.cc > /opt/pytorch/build/nccl/obj/graph/connect.o
Compiling graph/paths.cc > /opt/pytorch/build/nccl/obj/graph/paths.o
Compiling graph/rings.cc > /opt/pytorch/build/nccl/obj/graph/rings.o
Compiling graph/search.cc > /opt/pytorch/build/nccl/obj/graph/search.o
Compiling graph/topo.cc > /opt/pytorch/build/nccl/obj/graph/topo.o
Compiling graph/trees.cc > /opt/pytorch/build/nccl/obj/graph/trees.o
Compiling graph/tuning.cc > /opt/pytorch/build/nccl/obj/graph/tuning.o
Compiling graph/xml.cc > /opt/pytorch/build/nccl/obj/graph/xml.o
Compiling misc/argcheck.cc > /opt/pytorch/build/nccl/obj/misc/argcheck.o
Compiling misc/cudawrap.cc > /opt/pytorch/build/nccl/obj/misc/cudawrap.o
Compiling misc/gdrwrap.cc > /opt/pytorch/build/nccl/obj/misc/gdrwrap.o
Compiling misc/ibvsymbols.cc > /opt/pytorch/build/nccl/obj/misc/ibvsymbols.o
Compiling misc/ibvwrap.cc > /opt/pytorch/build/nccl/obj/misc/ibvwrap.o
Compiling misc/param.cc > /opt/pytorch/build/nccl/obj/misc/param.o
Compiling misc/ipcsocket.cc > /opt/pytorch/build/nccl/obj/misc/ipcsocket.o
Compiling misc/nvmlwrap.cc > /opt/pytorch/build/nccl/obj/misc/nvmlwrap.o
Compiling misc/shmutils.cc > /opt/pytorch/build/nccl/obj/misc/shmutils.o
Compiling misc/socket.cc > /opt/pytorch/build/nccl/obj/misc/socket.o
Compiling misc/strongstream.cc > /opt/pytorch/build/nccl/obj/misc/strongstream.o
Compiling misc/utils.cc > /opt/pytorch/build/nccl/obj/misc/utils.o
Compiling transport/coll_net.cc > /opt/pytorch/build/nccl/obj/transport/coll_net.o
Compiling transport/generic.cc > /opt/pytorch/build/nccl/obj/transport/generic.o
Compiling transport/net.cc > /opt/pytorch/build/nccl/obj/transport/net.o
Compiling transport/net_ib.cc > /opt/pytorch/build/nccl/obj/transport/net_ib.o
Compiling transport/net_socket.cc > /opt/pytorch/build/nccl/obj/transport/net_socket.o
Compiling transport/nvls.cc > /opt/pytorch/build/nccl/obj/transport/nvls.o
Compiling transport/p2p.cc > /opt/pytorch/build/nccl/obj/transport/p2p.o
Compiling transport/profiler.cc > /opt/pytorch/build/nccl/obj/transport/profiler.o
Compiling transport/shm.cc > /opt/pytorch/build/nccl/obj/transport/shm.o
Compiling register/coll_reg.cc > /opt/pytorch/build/nccl/obj/register/coll_reg.o
Compiling register/register.cc > /opt/pytorch/build/nccl/obj/register/register.o
Compiling register/sendrecv_reg.cc > /opt/pytorch/build/nccl/obj/register/sendrecv_reg.o
Compiling plugin/net.cc > /opt/pytorch/build/nccl/obj/plugin/net.o
Compiling plugin/plugin_open.cc > /opt/pytorch/build/nccl/obj/plugin/plugin_open.o
Compiling plugin/profiler.cc > /opt/pytorch/build/nccl/obj/plugin/profiler.o
Compiling plugin/tuner.cc > /opt/pytorch/build/nccl/obj/plugin/tuner.o
Compiling plugin/net/net_v10.cc > /opt/pytorch/build/nccl/obj/plugin/net/net_v10.o
Compiling plugin/net/net_v6.cc > /opt/pytorch/build/nccl/obj/plugin/net/net_v6.o
Compiling plugin/net/net_v7.cc > /opt/pytorch/build/nccl/obj/plugin/net/net_v7.o
Compiling plugin/net/net_v8.cc > /opt/pytorch/build/nccl/obj/plugin/net/net_v8.o
Compiling plugin/net/net_v9.cc > /opt/pytorch/build/nccl/obj/plugin/net/net_v9.o
Compiling plugin/tuner/tuner_v2.cc > /opt/pytorch/build/nccl/obj/plugin/tuner/tuner_v2.o
Compiling plugin/tuner/tuner_v3.cc > /opt/pytorch/build/nccl/obj/plugin/tuner/tuner_v3.o
Compiling plugin/tuner/tuner_v4.cc > /opt/pytorch/build/nccl/obj/plugin/tuner/tuner_v4.o
Compiling plugin/profiler/profiler_v1.cc > /opt/pytorch/build/nccl/obj/plugin/profiler/profiler_v1.o
Compiling plugin/profiler/profiler_v2.cc > /opt/pytorch/build/nccl/obj/plugin/profiler/profiler_v2.o
Compiling plugin/profiler/profiler_v3.cc > /opt/pytorch/build/nccl/obj/plugin/profiler/profiler_v3.o
Compiling ras/client_support.cc > /opt/pytorch/build/nccl/obj/ras/client_support.o
Compiling ras/collectives.cc > /opt/pytorch/build/nccl/obj/ras/collectives.o
Compiling ras/peers.cc > /opt/pytorch/build/nccl/obj/ras/peers.o
Compiling ras/ras.cc > /opt/pytorch/build/nccl/obj/ras/ras.o
Compiling ras/rasnet.cc > /opt/pytorch/build/nccl/obj/ras/rasnet.o
Compiling enhcompat.cc > /opt/pytorch/build/nccl/obj/enhcompat.o
make[2]: Entering directory '/opt/pytorch/third_party/nccl/src/device'
Compiling ras/client.cc > /opt/pytorch/build/nccl/obj/ras/client.o
NVCC_GENCODE is -gencode=arch=compute_90,code=sm_90 -gencode=arch=compute_100,code=sm_100 -gencode=arch=compute_101,code=sm_101 -gencode=arch=compute_120,code=sm_120
NVCC_GENCODE is -gencode=arch=compute_90,code=sm_90 -gencode=arch=compute_100,code=sm_100 -gencode=arch=compute_101,code=sm_101 -gencode=arch=compute_120,code=sm_120
Dependencies src/device/common.cu
Dependencies /opt/pytorch/build/nccl/obj/device/gensrc/all_gather.cu
Dependencies /opt/pytorch/build/nccl/obj/device/gensrc/broadcast.cu
Dependencies /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce.cu
Dependencies src/device/onerank.cu
Dependencies /opt/pytorch/build/nccl/obj/device/gensrc/reduce.cu
Dependencies /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter.cu
Dependencies /opt/pytorch/build/nccl/obj/device/gensrc/sendrecv.cu
Dependencies /opt/pytorch/build/nccl/obj/device/gensrc/host_table.cc
Dependencies /opt/pytorch/build/nccl/obj/device/gensrc/device_table.cu
Linking ncclras > /opt/pytorch/build/nccl/bin/ncclras
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/host_table.cc
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_i32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_i64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_premulsum_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_premulsum_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_premulsum_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_minmax_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_premulsum_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_premulsum_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_premulsum_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_premulsum_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_premulsum_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_prod_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_prod_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_prod_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_premulsum_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_prod_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_prod_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_prod_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_prod_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_prod_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_prod_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sum_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sum_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sum_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_gather.cu
Compiling src/device/onerank.cu
Compiling src/device/common.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/device_table.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/sendrecv.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sumpostdiv_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sumpostdiv_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sum_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sum_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sumpostdiv_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sum_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sum_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sum_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sum_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sum_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sum_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_sum_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sumpostdiv_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sum_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sumpostdiv_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sumpostdiv_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sum_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sum_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sum_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sum_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_i32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_i64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_scatter_sum_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_minmax_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_premulsum_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_premulsum_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_premulsum_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_premulsum_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_premulsum_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_premulsum_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_premulsum_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_premulsum_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_premulsum_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_prod_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_prod_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_prod_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_prod_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_prod_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_prod_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_prod_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_prod_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_prod_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sum_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sum_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sum_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sum_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sum_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sum_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sum_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sum_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sum_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sumpostdiv_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sumpostdiv_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/all_reduce_sumpostdiv_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/broadcast.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_minmax_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_minmax_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_minmax_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_minmax_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_minmax_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_minmax_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_minmax_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_minmax_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_minmax_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_premulsum_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_premulsum_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_premulsum_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_premulsum_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_premulsum_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_premulsum_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_premulsum_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_premulsum_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_premulsum_u8.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_prod_bf16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_prod_f16.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_prod_f32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_prod_f64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_prod_f8e4m3.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_prod_f8e5m2.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_prod_u32.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_prod_u64.cu
Compiling /opt/pytorch/build/nccl/obj/device/gensrc/reduce_prod_u8.cu
make[2]: Leaving directory '/opt/pytorch/third_party/nccl/src/device'
Linking libnccl.so.2.26.2 > /opt/pytorch/build/nccl/lib/libnccl.so.2.26.2
Archiving libnccl_static.a > /opt/pytorch/build/nccl/lib/libnccl_static.a
make[1]: Leaving directory '/opt/pytorch/third_party/nccl/src'
ninja: build stopped: subcommand failed.
The command '/bin/sh -c /tmp/pytorch/install.sh || /tmp/pytorch/build.sh' returned a non-zero code: 1
```
### Versions
2.7.0 build from source:
Ubuntu 24.04 Python 3.12, CUDA 12.8.1- sbsa drivers
Docker: https://github.com/dusty-nv/jetson-containers/blob/master/packages/pytorch/Dockerfile
scripts: https://github.com/dusty-nv/jetson-containers/blob/master/packages/pytorch/build.sh
config.py: https://github.com/dusty-nv/jetson-containers/blob/master/packages/pytorch/config.py
cc @malfet @seemethere
| true
|
3,019,802,239
|
IGNORE: Testing OIDC
|
zxiiro
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
This reverts commit 8313bc27f2e1625a16622cb1d88be40c163e4959.
Fixes #ISSUE_NUMBER
| true
|
3,019,768,833
|
Tighten tolerance of test_vmapvjp_linalg_tensorsolve_cpu_float32
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 11
|
COLLABORATOR
|
With the optimization of `solve` that uses a transposed input, this test fails to meet these tolerances, but it passes without it.
@pytorchbot topic: not user facing
| true
|
3,019,615,353
|
[ROCm] Update CUDAPluggableAllocator.h
|
amd-sriram
|
closed
|
[
"oncall: distributed",
"module: rocm",
"module: cpu",
"module: mkldnn",
"open source",
"release notes: quantization",
"release notes: rocm",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo",
"release notes: inductor (aoti)"
] | 4
|
CONTRIBUTOR
|
Alter the flag used to select the correct streamType in the CUDAPluggableAllocator class on ROCm GPUs. The TORCH_HIP_VERSION flag does not work as intended for ROCm, so it is replaced with USE_ROCM. The bug was impacting Distributed Fused Adam in ROCm/APEX when using the nccl_ub feature. This change has been tested with rocm/apex.
See PR https://github.com/ROCm/apex/pull/184
Related Commit - https://github.com/ROCm/apex/commit/6fd8b50f5c913765a060c1628ead47049a1f7d4c
https://github.com/ROCm/pytorch/commit/39a799fc283eb84ea5df842fe56b17289b49c914 - rocm/pytorch [release/2.7]
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv @voznesenskym @penguinwu @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,019,611,575
|
Raise an Error when File Not Found in `torch.jit.load()`
|
ILCSFNO
|
open
|
[
"oncall: jit",
"module: error checking",
"actionable"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The doc of [torch.jit.load()](https://pytorch.org/docs/stable/generated/torch.jit.load.html#torch-jit-load) shows its description as below:
https://github.com/pytorch/pytorch/blob/ad81eeb7c7c906e0cdd04a5cc8fdb9592281c317/torch/jit/_serialization.py#L105-L107
Tried repro below:
### Repro
```python
import torch
model = torch.nn.Linear(5, 10)
script = torch.jit.script(model)
torch.jit.save(script, 'model.pt')
torch.jit.load('model.pt', _extra_files={'extra_file.txt': 'This is an extra file.'})
```
### Output
```text
RecursiveScriptModule(original_name=Linear)
```
This shows that a key of `_extra_files` can refer to a non-existent file without any error being raised.
To find the reason, I looked into the implementation.
It checks the keys of `_extra_files` here (outside):
https://github.com/pytorch/pytorch/blob/ad81eeb7c7c906e0cdd04a5cc8fdb9592281c317/torch/csrc/jit/serialization/import.cpp#L257-L264
And then inside, it performs the lookup in this process (inside):
https://github.com/pytorch/pytorch/blob/ad81eeb7c7c906e0cdd04a5cc8fdb9592281c317/caffe2/serialize/inline_container.cc#L298-L318
It shows that when the record is not found (FILE_NOT_FOUND), it returns false, which is fine by itself.
So it is the outside caller that should check for a false return; I suggest raising an error there.
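A minimal sketch of the suggested outer check. This is pure Python for illustration: the dict of archive records and the `extra/` record-name prefix stand in for the real PyTorchStreamReader, and `read_extra_files` is a hypothetical name, not the actual API.

```python
def read_extra_files(archive_records, extra_files):
    """Populate extra_files in place; raise if a requested record is absent.

    archive_records: dict mapping record names to bytes, a stand-in for
    the real archive reader. All names here are illustrative.
    """
    for key in extra_files:
        record_name = f"extra/{key}"
        if record_name not in archive_records:
            # Instead of silently ignoring a false return from the reader,
            # surface the problem to the caller.
            raise FileNotFoundError(
                f"extra file '{key}' was requested but not found in the archive"
            )
        extra_files[key] = archive_records[record_name]
    return extra_files
```

With this check, the repro above would fail loudly on the unknown key `extra_file.txt` instead of returning a loaded module.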
Thanks for noting.
### Versions
Nightly
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @malfet
| true
|
3,019,519,917
|
Fix instantiate_device_type_tests() for 3rd-party devices
|
wizzniu
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
For 3rd-party devices, calling ``instantiate_device_type_tests()`` with a plain ``str`` (rather than a ``List[str]``/``Tuple[str]``) for the ``only_for`` or ``except_for`` argument causes unexpected results.
For example, if calling ``instantiate_device_type_tests(TestXXX, globals(), only_for="cpu")``, it goes into [filter_desired_device_types()](https://github.com/pytorch/pytorch/blob/f38dae76ee8dccd60f99bbddb48f2520f436fa1a/torch/testing/_internal/common_device_type.py#L729) and results in ``only_for=['c', 'p', 'u']``, because the ``only_for`` we passed is the string "cpu" and gets iterated character by character.
This PR fixes the above unexpected behavior for ``str`` case.
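The character-by-character splitting comes from treating a bare string as an iterable of device types. A minimal sketch of the kind of normalization this PR applies (the function name is illustrative, not the actual helper in `common_device_type.py`):

```python
def normalize_device_types(only_for):
    # A bare string such as "cpu" is itself iterable, so without this
    # guard it would be filtered downstream as ['c', 'p', 'u'].
    if only_for is None:
        return None
    if isinstance(only_for, str):
        return [only_for]
    return list(only_for)
```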
cc @albanD
| true
|
3,019,487,144
|
Add description of several params in the basic usage of `torch.min()`, `torch.max()`, `torch.all()` and `torch.any()`
|
ILCSFNO
|
open
|
[
"module: docs",
"triaged",
"actionable"
] | 3
|
CONTRIBUTOR
|
### 📚 The doc issue
The doc of [torch.min()](https://pytorch.org/docs/stable/generated/torch.min.html#torch-min) shows its description as below three:
https://github.com/pytorch/pytorch/blob/f38dae76ee8dccd60f99bbddb48f2520f436fa1a/torch/_torch_docs.py#L7106-L7111
https://github.com/pytorch/pytorch/blob/f38dae76ee8dccd60f99bbddb48f2520f436fa1a/torch/_torch_docs.py#L7121-L7143
https://github.com/pytorch/pytorch/blob/f38dae76ee8dccd60f99bbddb48f2520f436fa1a/torch/_torch_docs.py#L7156-L7159
But for signature, it shows that:
```text
@overload
def min(input: Tensor, *, out: Optional[Tensor] = None) -> Tensor:
@overload
def min(input: Tensor, other: Tensor, *, out: Optional[Tensor] = None) -> Tensor:
@overload
def min(input: Tensor, dim: _int, keepdim: _bool = False, *, out: Union[Tensor, Tuple[Tensor, ...], List[Tensor], None] = None) -> torch.return_types.min:
@overload
def min(input: Tensor, dim: Union[str, ellipsis, None], keepdim: _bool = False, *, out: Union[Tensor, Tuple[Tensor, ...], List[Tensor], None] = None) -> torch.return_types.min:
```
So with `dim` and `other` both not specified, `out` can be used.
Tried repro below, which works well and verifies the truth:
### Repro
```python
import torch
input_data = torch.randn(3, 3)
torch.min(input_data, out=torch.tensor(0.))
```
### Output
```text
tensor(-1.6479)
```
Further, some other funcs:
* [torch.max()](https://pytorch.org/docs/stable/generated/torch.max.html#torch.max) and [torch.all()](https://pytorch.org/docs/stable/generated/torch.all.html#torch.all) also have these inconsistencies.
* [torch.any()](https://pytorch.org/docs/stable/generated/torch.any.html#torch.any) does not have any description of its params in its basic usage.
So suggest to fix the basic description of `torch.min()`, `torch.max()`, `torch.all()`, `torch.any()`, and if possible, any other similar funcs.
Thanks for noting.
### Suggest a potential alternative/fix
* Suggest to fix the basic description of `torch.min()`, `torch.max()`, `torch.all()`, `torch.any()`, and if possible, any other similar funcs.
cc @svekars @sekyondaMeta @AlannaBurke
| true
|
3,019,474,471
|
[c10d] Allow split_group to work with non nccl backends
|
deepshah133
|
open
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 2
|
CONTRIBUTOR
|
Summary:
Currently things are hardcoded to only work with the NCCL backend. Extend it
to allow NCCL plus custom plugin backends.
The split-specific methods/attributes have not been added to the base
Backend and Options, as some of them are specific to backend implementations.
Instead, explicit checks have been added to the split_group method for the
expected methods and attributes.
I am open to making them part of the base Backend if folks prefer.
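A hedged sketch of the duck-typed capability check described above. The attribute names below are placeholders, not the actual c10d backend API; the point is only the pattern of checking for required members instead of hardcoding an `isinstance` test against the NCCL backend class.

```python
# Illustrative names only -- the real split_group checks its own set.
REQUIRED_SPLIT_ATTRS = ("perform_split", "split_options")

def backend_supports_split(backend):
    # Rather than `isinstance(backend, ProcessGroupNCCL)`, check explicitly
    # for the methods/attributes split_group needs, so custom plugin
    # backends can opt in by providing them.
    return all(hasattr(backend, attr) for attr in REQUIRED_SPLIT_ATTRS)
```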
Test Plan:
CI
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,019,307,806
|
[Docs] Add Description of `validate_args` for torch.distributions
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"skip-url-lint"
] | 16
|
CONTRIBUTOR
|
Fixes #152165
| true
|
3,019,271,933
|
`torch._inductor.exc.InductorError: CppCompileError: C++ compile error` after Torch 2.7 Release
|
BillHuang2001
|
closed
|
[
"module: regression",
"oncall: pt2",
"oncall: cpu inductor"
] | 5
|
NONE
|
### 🐛 Describe the bug
Following the release of Torch 2.7, we are encountering errors in our CI pipeline.
<details>
<summary>CI history</summary>
Repo: https://github.com/EMI-Group/evox
Since: commit d62696a72e6c6ed161ad2ca840a7bf097d98a2d3, the day Torch 2.7 was released.
Raw log: [link](https://productionresultssa11.blob.core.windows.net/actions-results/3add535a-ab47-4b18-8650-d784944dd139/workflow-job-run-d98757c5-d0da-5665-a5e9-f6125474e3e5/logs/job/job-logs.txt?rsct=text%2Fplain&se=2025-04-25T07%3A36%3A11Z&sig=z%2FPmYAuVXva0ND9IQhUmGxT5Am2Dck7NL0InzcjtSok%3D&ske=2025-04-25T18%3A12%3A55Z&skoid=ca7593d4-ee42-46cd-af88-8b886a2f84eb&sks=b&skt=2025-04-25T06%3A12%3A55Z&sktid=398a6654-997b-47e9-b12b-9515b896b4de&skv=2025-01-05&sp=r&spr=https&sr=b&st=2025-04-25T07%3A26%3A06Z&sv=2025-01-05)
</details>
We’ve provided a minimal code snippet that can reproduce the issue. However, this reproducer is NOT reliable across all systems — the error appears on some machines but not others. We are still investigating the possible cause of this inconsistency.
```python
import torch
def cal_hv(fit: torch.Tensor, ref: torch.Tensor, pop_size: int, n_sample: int):
n, m = fit.size()
alpha = torch.cumprod(torch.cat([torch.ones(1, device=fit.device), (pop_size - torch.arange(1, n, device=fit.device)) / (n - torch.arange(1, n, device=fit.device))]), dim=0) / torch.arange(1, n + 1, device=fit.device)
alpha = torch.nan_to_num(alpha)
f_min = torch.min(fit, dim=0).values
samples = torch.rand(n_sample, m, device=fit.device) * (ref - f_min) + f_min
ds = torch.zeros(n_sample, dtype=torch.int64, device=fit.device)
pds = (fit.unsqueeze(0).expand(n_sample, -1, -1) - samples.unsqueeze(1).expand(-1, n, -1) <= 0).all(dim=2)
ds = torch.sum(torch.where(pds, ds.unsqueeze(1) + 1, ds.unsqueeze(1)), dim=1)
ds = torch.where(ds == 0, ds, ds - 1)
temp = torch.where(pds.T, ds.unsqueeze(0), -1)
value = torch.where(temp != -1, alpha[temp], torch.tensor(0, dtype=torch.float32))
f = torch.sum(value, dim=1)
f = f * torch.prod(ref - f_min) / n_sample
return f
n_objs = 3
pop_size = 4
arr = torch.zeros((pop_size, 10), dtype=torch.float32)
fit = torch.tensor([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9], [1.0, 1.1, 1.2]], dtype=torch.float32)
ref = torch.full((n_objs,), torch.max(fit).item() * 1.2)
n_sample = 10000
fn = torch.compile(cal_hv)
hv = fn(fit, ref, pop_size, n_sample)
print("HV:", hv)
```
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/home/bill/Source/evox/torch_bug.py", line 36, in <module>
hv = fn(fit, ref, pop_size, n_sample)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1432, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1213, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 598, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1059, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 761, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 797, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 257, in _fn
return fn(*args, **kwargs)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 715, in transform
tracer.run()
~~~~~~~~~~^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3500, in run
super().run()
~~~~~~~~~~~^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1337, in run
while self.step():
~~~~~~~~~^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1246, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3701, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3686, in _return
self.output.compile_subgraph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self,
^^^^^
...<2 lines>...
),
^^
)
^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1144, in compile_subgraph
self.compile_and_call_fx_graph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
tx, list(reversed(stack_values)), root, output_replacements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1437, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1487, in call_user_compiler
return self._call_user_compiler(gm)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1519, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/__init__.py", line 2347, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2101, in compile_fx
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 2089, in compile_fx
return aot_autograd(
~~~~~~~~~~~~~
...<6 lines>...
cudagraphs=cudagraphs,
~~~~~~~~~~~~~~~~~~~~~~
)(model_, example_inputs_)
~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1160, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
dispatch_and_compile,
...<5 lines>...
remote,
)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 775, in load
compiled_fn = dispatch_and_compile()
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1145, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
functional_call,
^^^^^^^^^^^^^^^^
...<3 lines>...
shape_env,
^^^^^^^^^^
)
^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
flat_fn, fake_flat_args, aot_config, fake_mode, shape_env
)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 820, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
~~~~~~~~~~~^
flat_fn,
^^^^^^^^
...<2 lines>...
fw_metadata=fw_metadata,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 219, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 479, in __call__
return self.compiler_fn(gm, example_inputs)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1944, in fw_compiler_base
return inner_compile(
gm,
...<5 lines>...
boxed_forward_device_index=forward_device,
)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 628, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
gm,
^^^
example_inputs,
^^^^^^^^^^^^^^^
**kwargs,
^^^^^^^^^
)
^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 760, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
e.__traceback__
) from None
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 745, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
gm, example_inputs, inputs_to_check, **graph_kwargs
)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1295, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1197, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
~~~~~~~~~~~~~~~~~~~~~~~^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2083, in compile_to_module
return self._compile_to_module()
~~~~~~~~~~~~~~~~~~~~~~~^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2130, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
key,
...<2 lines>...
attrs={**self.constants, **self.torchbind_constants},
)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 2747, in load_by_key_path
mod = _reload_python_module(key, path)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/runtime/compile_tasks.py", line 36, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_bill/22/c22sy4vgisefm32p64mx4pq4545w2unueb73mqgmai5yzpafu433.py", line 369, in <module>
async_compile.wait(globals())
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 424, in wait
self._wait_futures(scope)
~~~~~~~~~~~~~~~~~~^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/async_compile.py", line 445, in _wait_futures
scope[key] = result.result()
~~~~~~~~~~~~~^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 3224, in result
return self.result_fn()
~~~~~~~~~~~~~~^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 2242, in future
result = get_result()
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 2050, in load_fn
future.result()
~~~~~~~~~~~~~^^
File "/home/bill/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/lib/python3.13/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
~~~~~~~~~~~~~~~~~^^
File "/home/bill/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/bill/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/codecache.py", line 2079, in _worker_compile_cpp
cpp_builder.build()
~~~~~~~~~~~~~~~~~^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/cpp_builder.py", line 1601, in build
run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/cpp_builder.py", line 355, in run_compile_cmd
_run_compile_cmd(cmd_line, cwd)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/_inductor/cpp_builder.py", line 350, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._inductor.exc.InductorError: CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_bill/gl/cgllfn3yhasmcryiebiyv3ivgh2b42gg2esf45xanbduygllpzir.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_AVX2 -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/home/bill/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/include/python3.13 -I/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/include -I/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -mavx2 -mfma -mf16c -D_GLIBCXX_USE_CXX11_ABI=1 -ltorch -ltorch_cpu -ltorch_python -lgomp -L/home/bill/.local/share/uv/python/cpython-3.13.1-linux-x86_64-gnu/lib -L/home/bill/misc/tmp/.venv/lib/python3.13/site-packages/torch/lib -o /tmp/torchinductor_bill/gl/cgllfn3yhasmcryiebiyv3ivgh2b42gg2esf45xanbduygllpzir.so
Output:
/tmp/torchinductor_bill/gl/cgllfn3yhasmcryiebiyv3ivgh2b42gg2esf45xanbduygllpzir.cpp: In function ‘void kernel(float*, const int64_t*, const float*, const float*)’:
/tmp/torchinductor_bill/gl/cgllfn3yhasmcryiebiyv3ivgh2b42gg2esf45xanbduygllpzir.cpp:25:23: error: redeclaration of ‘float tmp_acc0_arr [8]’
25 | float tmp_acc0_arr[8];
| ^~~~~~~~~~~~
/tmp/torchinductor_bill/gl/cgllfn3yhasmcryiebiyv3ivgh2b42gg2esf45xanbduygllpzir.cpp:13:23: note: ‘float tmp_acc0_arr [8]’ previously declared here
13 | float tmp_acc0_arr[8];
| ^~~~~~~~~~~~
```
</details>
<details>
<summary>cgllfn3yhasmcryiebiyv3ivgh2b42gg2esf45xanbduygllpzir.cpp</summary>
```cpp
#include "/tmp/torchinductor_bill/pi/cpicxudqmdsjh5cm4klbtbrvy2cxwr7whxl3md2zzdjdf3orvfdf.h"
extern "C" void kernel(float* in_out_ptr0,
const int64_t* in_ptr0,
const float* in_ptr1,
const float* in_ptr2)
{
auto out_ptr0 = in_out_ptr0;
{
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(4L); x0+=static_cast<int64_t>(8L))
{
{
float tmp_acc0_arr[8];
for (int i = 0; i < 8; i++)
{
tmp_acc0_arr[i] = 0;
}
float tmp_acc0 = 0;
at::vec::Vectorized<float> tmp_acc0_vec = at::vec::Vectorized<float>(0);
at::vec::Vectorized<float> tmp_acc0_vec_arr[8];
for (int i = 0; i < 8; i++)
{
tmp_acc0_vec_arr[i] = at::vec::Vectorized<float>(0);
}
float tmp_acc0_arr[8];
for (int i = 0; i < 8; i++)
{
tmp_acc0_arr[i] = 0;
}
#pragma omp parallel num_threads(8)
{
int tid = omp_get_thread_num();
at::vec::Vectorized<float> tmp_acc0_vec_local = at::vec::Vectorized<float>(0);
float tmp_acc0_local = 0;
#pragma omp for
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(10000L); x1+=static_cast<int64_t>(1L))
{
{
if(C10_LIKELY(x0 >= static_cast<int64_t>(0L) && x0 < static_cast<int64_t>(4L)))
{
for (int64_t x0_tail = static_cast<int64_t>(0L);x0_tail < static_cast<int64_t>(4L); x0_tail++)
{
auto tmp0 = in_ptr0[static_cast<int64_t>(x0_tail + 4L*x1)];
auto tmp1 = static_cast<int64_t>(-1);
auto tmp2 = tmp0 != tmp1;
auto tmp3 = 4L;
auto tmp4 = c10::convert<int64_t>(tmp3);
auto tmp5 = decltype(tmp0)(tmp0 + tmp4);
auto tmp6 = tmp0 < 0;
auto tmp7 = tmp6 ? tmp5 : tmp0;
auto tmp8 = tmp7;
auto tmp9 = c10::convert<int64_t>(tmp8);
TORCH_CHECK((0 <= tmp9) & (tmp9 < 4L), "index out of bounds: 0 <= tmp9 < 4L");
auto tmp11 = in_ptr1[static_cast<int64_t>(tmp7)];
auto tmp12 = 1L + tmp7;
auto tmp13 = c10::convert<float>(tmp12);
auto tmp14 = tmp11 / tmp13;
auto tmp15 = std::numeric_limits<float>::infinity();
auto tmp16 = tmp14 == tmp15;
auto tmp17 = -std::numeric_limits<float>::infinity();
auto tmp18 = tmp14 == tmp17;
auto tmp19 = std::isnan(tmp14);
auto tmp20 = static_cast<float>(0.0);
auto tmp21 = tmp19 ? tmp20 : tmp14;
auto tmp22 = static_cast<float>(-3.4028234663852886e+38);
auto tmp23 = tmp18 ? tmp22 : tmp21;
auto tmp24 = static_cast<float>(3.4028234663852886e+38);
auto tmp25 = tmp16 ? tmp24 : tmp23;
auto tmp26 = tmp2 ? tmp25 : tmp20;
tmp_acc0_arr[x0_tail - static_cast<int64_t>(0L)] = tmp_acc0_arr[x0_tail - static_cast<int64_t>(0L)] + tmp26;
}
}
}
}
tmp_acc0_vec_arr[tid] = tmp_acc0_vec_local;
tmp_acc0_arr[tid] = tmp_acc0_local;
}
for (int tid = 0; tid < 8; tid++)
{
tmp_acc0_vec = tmp_acc0_vec + tmp_acc0_vec_arr[tid];
}
for (int tid = 0; tid < 8; tid++)
{
tmp_acc0 = tmp_acc0 + tmp_acc0_arr[tid];
}
if(C10_UNLIKELY(x0 >= static_cast<int64_t>(0L) && x0 < static_cast<int64_t>(4L)))
{
for (int64_t x0_tail = static_cast<int64_t>(0L);x0_tail < static_cast<int64_t>(4L); x0_tail++)
{
in_out_ptr0[static_cast<int64_t>(x0_tail)] = tmp_acc0_arr[x0_tail - static_cast<int64_t>(0L)];
}
}
}
}
}
{
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(4L); x0+=static_cast<int64_t>(8L))
{
{
if(C10_LIKELY(x0 >= static_cast<int64_t>(0L) && x0 < static_cast<int64_t>(4L)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(4L));
auto tmp1 = in_ptr2[static_cast<int64_t>(0L)];
auto tmp2 = at::vec::Vectorized<float>(tmp1);
auto tmp3 = tmp0 * tmp2;
auto tmp4 = static_cast<float>(0.0001);
auto tmp5 = at::vec::Vectorized<float>(tmp4);
auto tmp6 = tmp3 * tmp5;
tmp6.store(in_out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(4L));
}
}
}
}
}
// Python bindings to call kernel():
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <sstream>
#include <cstdlib>
#ifndef _MSC_VER
#if __cplusplus < 202002L
// C++20 (earlier) code
// https://en.cppreference.com/w/cpp/language/attributes/likely
#define likely(x) __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
#endif
#else
#define likely(x) (x)
#define unlikely(x) (x)
#endif
// This is defined in guards.cpp so we don't need to import PyTorch headers that are slooow.
// We manually link it below to workaround issues with fbcode build.
static void* (*_torchinductor_pyobject_tensor_data_ptr)(PyObject* obj);
template <typename T> static inline T parse_arg(PyObject* args, size_t n) {
static_assert(std::is_pointer_v<T>, "arg type must be pointer or long");
return static_cast<T>(_torchinductor_pyobject_tensor_data_ptr(PyTuple_GET_ITEM(args, n)));
}
template <> inline int64_t parse_arg<int64_t>(PyObject* args, size_t n) {
auto result = PyLong_AsSsize_t(PyTuple_GET_ITEM(args, n));
if(unlikely(result == -1 && PyErr_Occurred()))
throw std::runtime_error("expected int arg");
return result;
}
template <> inline uintptr_t parse_arg<uintptr_t>(PyObject* args, size_t n) {
auto result = PyLong_AsVoidPtr(PyTuple_GET_ITEM(args, n));
if(unlikely(result == reinterpret_cast<void*>(-1) && PyErr_Occurred()))
throw std::runtime_error("expected int arg");
return reinterpret_cast<uintptr_t>(result);
}
static PyObject* kernel_py(PyObject* self, PyObject* args) {
try {
if(unlikely(!PyTuple_CheckExact(args)))
throw std::runtime_error("tuple args required");
if(unlikely(PyTuple_GET_SIZE(args) != 4))
throw std::runtime_error("requires 4 args");
kernel(parse_arg<float*>(args, 0), parse_arg<int64_t*>(args, 1), parse_arg<float*>(args, 2), parse_arg<float*>(args, 3)); Py_RETURN_NONE;
} catch(std::exception const& e) {
PyErr_SetString(PyExc_RuntimeError, e.what());
return nullptr;
} catch(...) {
PyErr_SetString(PyExc_RuntimeError, "unhandled error");
return nullptr;
}
}
static PyMethodDef py_methods[] = {
{"kernel", kernel_py, METH_VARARGS, ""},
{NULL, NULL, 0, NULL}};
static struct PyModuleDef py_module =
{PyModuleDef_HEAD_INIT, "kernel", NULL, -1, py_methods};
PyMODINIT_FUNC PyInit_kernel(void) {
const char* str_addr = std::getenv("_TORCHINDUCTOR_PYOBJECT_TENSOR_DATA_PTR");
if(!str_addr) {
PyErr_SetString(PyExc_RuntimeError, "_TORCHINDUCTOR_PYOBJECT_TENSOR_DATA_PTR must be set");
return nullptr;
}
std::istringstream iss(str_addr);
uintptr_t addr = 0;
iss >> addr;
_torchinductor_pyobject_tensor_data_ptr =
reinterpret_cast<decltype(_torchinductor_pyobject_tensor_data_ptr)>(addr);
PyObject* module = PyModule_Create(&py_module);
if (module == NULL) {
return NULL;
}
#ifdef Py_GIL_DISABLED
PyUnstable_Module_SetGIL(mod, Py_MOD_GIL_NOT_USED);
#endif
return module;
}
```
</details>
### Versions
On an unaffected system:
```
Collecting environment information...
PyTorch version: 2.7.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.13.2 (main, Feb 5 2025, 19:11:32) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-6.1.0-25-amd64-x86_64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU(s) scaling MHz: 38%
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 108 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.7.0+cpu
[conda] Could not collect
```
On a system affected by this bug:
```
Collecting environment information...
PyTorch version: 2.7.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: NixOS 25.05 (Warbler) (x86_64)
GCC version: (GCC) 14.2.1 20250322
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.40
Python version: 3.13.1 (main, Jan 5 2025, 05:33:47) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-6.14.3-x86_64-with-glibc2.40
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 PRO 6850HS with Radeon Graphics
CPU family: 25
Model: 68
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 37%
CPU max MHz: 4787.0000
CPU min MHz: 400.0000
BogoMIPS: 6388.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca debug_swap
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Ghostwrite: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.7.0+cpu
[conda] Could not collect
```
All Python environments are freshly created with the following commands
```bash
uv venv --python 3.13 # python version doesn't seem to matter here
uv pip install -U torch numpy --index-url https://download.pytorch.org/whl/cpu
```
We also attempted to downgrade the GCC version to 13 and 12 on a system with the error, but that had no effect.
cc @chauhang @penguinwu
| true
|
3,019,266,439
|
/usr/local/lib/python3.11/dist-packages/torch/autograd/graph.py:825: UserWarning: grid_sampler_2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'.
|
flydragon2018
|
open
|
[
"triaged",
"enhancement",
"module: determinism"
] | 0
|
NONE
|
### 🐛 Describe the bug
/usr/local/lib/python3.11/dist-packages/torch/autograd/graph.py:825: UserWarning: grid_sampler_2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ../aten/src/ATen/Context.cpp:91.)
### Versions
kaggle +ultralytic
cc @mruberry @kurtamohler
| true
|
3,019,260,888
|
Generate test reports for pytest when option is given
|
Flamefire
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
The argument needs to be appended when test reports should be generated. IS_CI is not necessarily set, so check TEST_SAVE_XML instead, as in other places where test reports are conditionally enabled.
See also https://github.com/pytorch/pytorch/issues/126523
| true
|
3,019,235,083
|
DISABLED test_e2e_compile_True_model_type2 (__main__.TestE2ESaveAndLoad)
|
jithunnair-amd
|
open
|
[
"module: rocm",
"triaged",
"skipped"
] | 1
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it's failing on the [MI300 runners](https://hud.pytorch.org/failure?name=periodic-rocm-mi300%20%2F%20linux-focal-rocm-py3.10%20%2F%20test%20(distributed%2C%201%2C%203%2C%20linux.rocm.gpu.mi300.4.test-2%2C%20module%3Arocm%2C%20oncall%3Adistributed)&jobName=linux-focal-rocm-py3.10%20%2F%20test%20(distributed%2C%201%2C%203%2C%20linux.rocm.gpu.mi300.4.test-2%2C%20module%3Arocm%2C%20oncall%3Adistributed)&failureCaptures=distributed%2Fcheckpoint%2Fe2e%2Ftest_e2e_save_and_load.py%3A%3ATestE2ESaveAndLoad%3A%3Atest_e2e_compile_True_model_type2)
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,019,231,126
|
DISABLED test_e2e_compile_True_model_type0 (__main__.TestE2ESaveAndLoad)
|
jithunnair-amd
|
open
|
[
"module: rocm",
"triaged",
"skipped"
] | 1
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it's failing on the [MI300 runners](https://hud.pytorch.org/failure?name=periodic-rocm-mi300%20%2F%20linux-focal-rocm-py3.10%20%2F%20test%20(distributed%2C%201%2C%203%2C%20linux.rocm.gpu.mi300.4.test-2%2C%20module%3Arocm%2C%20oncall%3Adistributed)&jobName=linux-focal-rocm-py3.10%20%2F%20test%20(distributed%2C%201%2C%203%2C%20linux.rocm.gpu.mi300.4.test-2%2C%20module%3Arocm%2C%20oncall%3Adistributed)&failureCaptures=distributed%2Fcheckpoint%2Fe2e%2Ftest_e2e_save_and_load.py%3A%3ATestE2ESaveAndLoad%3A%3Atest_e2e_compile_True_model_type0)
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,019,174,785
|
Generate test reports for pytest when option is given
|
Flamefire
|
closed
|
[
"oncall: distributed",
"module: cpu",
"module: mkldnn",
"module: amp (automated mixed precision)",
"ciflow/trunk",
"release notes: quantization",
"release notes: releng",
"ciflow/mps",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
The argument needs to be appended when test reports should be generated. `IS_CI` is not necessarily set, so check `TEST_SAVE_XML` instead, as in other places where test reports are conditionally enabled.
See also https://github.com/pytorch/pytorch/issues/126523
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @mcarilli @ptrblck @leslie-fang-intel @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,019,152,638
|
Extend compute_global_tensor_shape to multi dimension sharding
|
dharakk
|
open
|
[
"oncall: distributed",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152166
* #152751
### Summary
The `compute_global_tensor_shape` util all-gathers the shapes of the local
tensors from all ranks and then computes the shape of the global
DTensor based on the device mesh and the placements. Earlier this util
supported only a 1D device mesh; this PR extends it to support
multi-dimensional sharding.
Here we take a recursive approach to calculate the global shape via
a DFS on the device mesh.
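Conceptually, the result can be illustrated with a simplified torch-free sketch (a hypothetical helper, not the actual util — it assumes even sharding and describes placements as plain tuples, whereas the real util all-gathers the per-rank local shapes):

```python
def global_shape(local_shape, mesh_shape, placements):
    """Derive the global DTensor shape from a uniform local shape.

    placements[i] describes how the tensor is placed along mesh dim i:
    ("shard", tensor_dim) or ("replicate",).
    """
    shape = list(local_shape)
    for mesh_dim, placement in enumerate(placements):
        if placement[0] == "shard":
            # Each shard along this mesh dim contributes a slice of the
            # tensor dim, so the global size scales by the mesh dim size.
            shape[placement[1]] *= mesh_shape[mesh_dim]
    return tuple(shape)

# A (2, 4) local shard on a 2x2 mesh, sharded on both tensor dims,
# corresponds to a (4, 8) global tensor.
print(global_shape((2, 4), (2, 2), [("shard", 0), ("shard", 1)]))
```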
### Test
`pytest test/distributed/tensor/test_utils.py`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,019,002,411
|
Add Description of `validate_args` in `torch.distributions.`
|
ILCSFNO
|
closed
|
[
"module: distributions",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 📚 The doc issue
The doc of [torch.distributions.weibull.Weibull()](https://pytorch.org/docs/stable/distributions.html#torch.distributions.weibull.Weibull) shows its description as below:
https://github.com/pytorch/pytorch/blob/a936d596f6f7d2bc2dc47b4b2320208b4908e7f2/torch/distributions/weibull.py#L28-L31
But its full signature is:
```text
class torch.distributions.weibull.Weibull(scale, concentration, validate_args=None)
```
And in the code, I find that the function actually uses the param `validate_args`:
https://github.com/pytorch/pytorch/blob/a936d596f6f7d2bc2dc47b4b2320208b4908e7f2/torch/distributions/weibull.py#L39-L54
Further, I find more similar situations in `torch.distributions` (https://pytorch.org/docs/stable/distributions.html), such as `torch.distributions.distribution.Distribution()`, `torch.distributions.exp_family.ExponentialFamily()`, `torch.distributions.bernoulli.Bernoulli()`, etc.
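For illustration, a minimal torch-free sketch of the pattern these constructors follow — the `validate_args` flag gates parameter checks at construction time (a hypothetical simplified class, not the actual `Distribution` implementation):

```python
class SimpleWeibull:
    """Simplified stand-in showing how a validate_args flag typically
    gates argument validation in a distribution constructor."""

    def __init__(self, scale, concentration, validate_args=None):
        if validate_args:
            # Both parameters of a Weibull distribution must be positive.
            if scale <= 0:
                raise ValueError("scale must be positive")
            if concentration <= 0:
                raise ValueError("concentration must be positive")
        self.scale = scale
        self.concentration = concentration

# With validation enabled, invalid parameters raise immediately.
try:
    SimpleWeibull(-1.0, 1.5, validate_args=True)
    raised = False
except ValueError:
    raised = True
```

Because `validate_args` changes observable constructor behavior like this, it arguably belongs in the documented parameter list.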
Suggestions below. Thanks for noting!
### Suggest a potential alternative/fix
* Suggest to fix the description of param `validate_args` in doc of several funcs in: https://pytorch.org/docs/stable/distributions.html
cc @fritzo @neerajprad @alicanb @nikitaved
| true
|
3,018,977,168
|
Less Check on the triangular tensor of `L` in `torch.cholesky_solve()`
|
ILCSFNO
|
closed
|
[
"triaged",
"module: linear algebra"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The doc of [torch.cholesky_solve()](https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html#torch-cholesky-solve) shows its description as below:
https://github.com/pytorch/pytorch/blob/dda0c952e71a540f7ad8d040e35da727b4d91405/torch/_torch_docs.py#L2660-L2662
When `L` is passed directly as a tensor of symmetric or Hermitian positive-definite matrices, there appears to be no check on whether it is actually a lower- or upper-triangular Cholesky decomposition:
Just take the example from the doc and change the input `L` to `A`:
### Repro
```python
import torch
A = torch.randn(3, 3)
A = A @ A.T + torch.eye(3) * 1e-3 # Creates a symmetric positive-definite matrix
L = torch.linalg.cholesky(A) # Extract Cholesky decomposition
B = torch.randn(3, 2)
print("L:", L)
print("A:", A)
print("Result:", torch.cholesky_solve(B, A))
```
### Output
```text
L: tensor([[ 2.0328, 0.0000, 0.0000],
[ 1.1285, 0.6342, 0.0000],
[-0.3453, 1.1854, 1.1029]])
A: tensor([[ 4.1322, 2.2941, -0.7019],
[ 2.2941, 1.6758, 0.3621],
[-0.7019, 0.3621, 2.7406]])
Result: tensor([[-0.3205, -0.1527],
[ 0.4799, 0.2472],
[-0.1069, -0.0755]])
```
It still runs without error when solving with `A` instead of `L`.
Is it expected behavior that no check is performed on whether param `L` is actually a lower or upper triangular Cholesky decomposition?
Thanks for noting!
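For reference, the kind of check the caller might expect can be sketched torch-free (a hypothetical helper, not PyTorch code), using the matrices from the repro output above:

```python
def is_lower_triangular(mat, atol=1e-8):
    """Return True if every entry strictly above the diagonal is ~0."""
    n = len(mat)
    return all(
        abs(mat[i][j]) <= atol
        for i in range(n)
        for j in range(i + 1, len(mat[i]))
    )

# A Cholesky factor is lower triangular...
L = [[2.0, 0.0, 0.0],
     [1.1, 0.6, 0.0],
     [-0.3, 1.2, 1.1]]

# ...while the full SPD matrix A is not, so passing A to
# cholesky_solve silently computes against the wrong factor.
A = [[4.1, 2.3, -0.7],
     [2.3, 1.7, 0.4],
     [-0.7, 0.4, 2.7]]
```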
### Versions
Nightly
cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,018,969,617
|
[cutlass backend] add addmm and bmm for cutlass backend benchmark
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152163
Copying what @kadeng did.
```
FINAL results...
Experiment group: bmm (BS: 8, 1024x1024, 1024x1024) torch.float16
+-----------------------+--------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+---------------------+
| aten | 44.454172253608704 | 3.0991086587309837 | NA |
| triton | 44.06978189945221 | 0.07496077567338943 | -0.8646890374284049 |
| triton_persistent_tma | 43.598245829343796 | 0.06154991965740919 | -1.9254130284597197 |
| cutlass_lvl_default | 39.91834074258804 | 0.056073310784995556 | -10.20338762612423 |
+-----------------------+--------------------+----------------------+---------------------+
Experiment group: bmm (BS: 8, 1024x1024, 1024x1024) torch.bfloat16
+-----------------------+-------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+-------------------+----------------------+---------------------+
| aten | 49.05610531568527 | 0.160279156640172 | NA |
| triton | 43.97720843553543 | 0.0660805031657219 | -10.353241145961718 |
| triton_persistent_tma | 43.94153505563736 | 0.061738294549286366 | -10.425960697724962 |
| cutlass_lvl_default | 40.2066633105278 | 0.034127906896173954 | -18.039430460713596 |
+-----------------------+-------------------+----------------------+---------------------+
Average edge over aten (max(-edge, 0), higher is better):
triton: 5.608965091695062 (from 2 valid values)
triton_persistent_tma: 6.175686863092341 (from 2 valid values)
cutlass_lvl_default: 14.121409043418913 (from 2 valid values)
```
Differential Revision: [D73625766](https://our.internmc.facebook.com/intern/diff/D73625766/)
| true
|
3,018,959,763
|
torch.compile fails in FSDP due to .data assignment with different floating type
|
kbabiuchx
|
open
|
[
"triaged",
"module: fsdp",
"oncall: pt2",
"module: aotdispatch",
"module: dynamo",
"module: pt2-dispatcher"
] | 5
|
NONE
|
### 🐛 Describe the bug
When using torch.compile, a runtime error is raised:
`TorchRuntimeError: Failed running call_function <method 'set_' of 'torch._C.TensorBase' objects>(*(FakeTensor(..., size=(3,)), FakeTensor(..., size=(3,), dtype=torch.bfloat16)), **{}):
Could not set tensor of type c10::BFloat16 to a tensor of type float`
from FSDP during buffer preparation in _cast_buffers_to_dtype_and_device
`buffer.data = buffer.to(device=device, dtype=buffer_dtype)`
https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/_runtime_utils.py#L1645
As far as I know, FSDP is the only place in the Torch code where this method of assignment is used. However, reproduction of the bug does not require FSDP and is very short.
Minimal repro:
```python
import torch
@torch.compile
def func(x):
x.data = x.to(dtype=torch.bfloat16)
t = torch.tensor([1,2,3], dtype=torch.float)
func(t)
```
### Error logs
TorchRuntimeError Traceback (most recent call last)
<ipython-input-4-3d237e4f713e> in <cell line: 0>()
6
7 t = torch.tensor([1,2,3], dtype=torch.float)
----> 8 func(t)
27 frames
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/eval_frame.py in _fn(*args, **kwargs)
572
573 try:
--> 574 return fn(*args, **kwargs)
575 finally:
576 # Restore the dynamic layer stack depth if necessary.
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py in __call__(self, frame, cache_entry, frame_state)
1378 with compile_lock, _disable_current_modes():
1379 # skip=1: skip this frame
-> 1380 return self._torchdynamo_orig_callable(
1381 frame, cache_entry, self.hooks, frame_state, skip=1
1382 )
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py in __call__(self, frame, cache_entry, hooks, frame_state, skip)
1162 counters["frames"]["total"] += 1
1163 try:
-> 1164 result = self._inner_convert(
1165 frame, cache_entry, hooks, frame_state, skip=skip + 1
1166 )
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py in __call__(self, frame, cache_entry, hooks, frame_state, skip)
545
546 with compile_context(CompileContext(compile_id)):
--> 547 return _compile(
548 frame.f_code,
549 frame.f_globals,
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py in _compile(code, globals, locals, builtins, closure, compiler_fn, one_graph, export, export_constraints, hooks, cache_entry, cache_size, frame, frame_state, compile_id, skip)
984 guarded_code = None
985 try:
--> 986 guarded_code = compile_inner(code, one_graph, hooks, transform)
987
988 # NB: We only put_code_state in success case. Success case here
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py in compile_inner(code, one_graph, hooks, transform)
713 stack.enter_context(torch._dynamo.callback_handler.install_callbacks())
714 stack.enter_context(CompileTimeInstructionCounter.record())
--> 715 return _compile_inner(code, one_graph, hooks, transform)
716
717 return None # dead, but see https://github.com/python/mypy/issues/7577
/usr/local/lib/python3.11/dist-packages/torch/_utils_internal.py in wrapper_function(*args, **kwargs)
93
94 if not StrobelightCompileTimeProfiler.enabled:
---> 95 return function(*args, **kwargs)
96
97 return StrobelightCompileTimeProfiler.profile_compile_time(
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py in _compile_inner(code, one_graph, hooks, transform)
748 CompileContext.get().attempt = attempt
749 try:
--> 750 out_code = transform_code_object(code, transform)
751 break
752 except exc.RestartAnalysis as e:
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/bytecode_transformation.py in transform_code_object(code, transformations, safe)
1359 propagate_line_nums(instructions)
1360
-> 1361 transformations(instructions, code_options)
1362 return clean_and_assemble_instructions(instructions, keys, code_options)[1]
1363
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py in _fn(*args, **kwargs)
229 exit_stack.enter_context(torch_function_mode_stack_state_mgr)
230 try:
--> 231 return fn(*args, **kwargs)
232 finally:
233 cleanup.close()
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py in transform(instructions, code_options)
660 try:
661 with tracing(tracer.output.tracing_context), tracer.set_current_tx():
--> 662 tracer.run()
663 except exc.UnspecializeRestartAnalysis:
664 speculation_log.clear()
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/symbolic_convert.py in run(self)
2866
2867 def run(self):
-> 2868 super().run()
2869
2870 def should_compile_partial_graph(self):
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/symbolic_convert.py in run(self)
1050 try:
1051 self.output.push_tx(self)
-> 1052 while self.step():
1053 pass
1054 except TensorifyScalarRestartAnalysis:
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/symbolic_convert.py in step(self)
960
961 try:
--> 962 self.dispatch_table[inst.opcode](self, inst)
963 return not self.output.should_exit
964 except TensorifyScalarRestartAnalysis:
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/symbolic_convert.py in STORE_ATTR(self, inst)
1814
1815 try:
-> 1816 BuiltinVariable(setattr).call_function(
1817 self, [obj, ConstantVariable.create(inst.argval), val], {} # type: ignore[arg-type]
1818 )
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/variables/builtin.py in call_function(self, tx, args, kwargs)
1002 self.fn, [type(x) for x in args], bool(kwargs)
1003 )
-> 1004 return handler(tx, args, kwargs)
1005
1006 def call_method(
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/variables/builtin.py in builtin_dispatch(tx, args, kwargs)
841
842 def builtin_dispatch(tx: "InstructionTranslator", args, kwargs):
--> 843 rv = handler(tx, args, kwargs)
844 if rv:
845 return rv
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/variables/builtin.py in call_self_handler(tx, args, kwargs)
770 def call_self_handler(tx: "InstructionTranslator", args, kwargs):
771 try:
--> 772 result = self_handler(tx, *args, **kwargs)
773 if result is not None:
774 return result
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/variables/builtin.py in call_setattr(self, tx, obj, name_var, val)
1777 with dynamo_disable_grad(tx), torch.no_grad():
1778 # Step 2 - call `set_`
-> 1779 out = wrap_fx_proxy(
1780 tx,
1781 tx.output.create_proxy(
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/variables/builder.py in wrap_fx_proxy(tx, proxy, example_value, subclass_type, **options)
2151 }
2152 if subclass_type is None:
-> 2153 return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
2154 else:
2155 result = wrap_fx_proxy_cls(target_cls=TensorWithTFOverrideVariable, **kwargs)
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/variables/builder.py in wrap_fx_proxy_cls(target_cls, tx, proxy, example_value, subclass_type, **options)
2217 ):
2218 if example_value is None:
-> 2219 return _wrap_fx_proxy(
2220 target_cls, tx, proxy, example_value, subclass_type, **options
2221 )
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/variables/builder.py in _wrap_fx_proxy(target_cls, tx, proxy, example_value, subclass_type, **options)
2313 # only allow_non_graph_fake in this instance because we handle the non-fake
2314 # cases properly below.
-> 2315 example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
2316
2317 return handle_traced_output(
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/utils.py in get_fake_value(node, tx, allow_non_graph_fake)
2534 unimplemented(f"TypeError {node.target}: {cause}")
2535
-> 2536 raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
2537
2538 if not allow_non_graph_fake:
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/utils.py in get_fake_value(node, tx, allow_non_graph_fake)
2469 try:
2470 with tx.fake_mode, enable_python_dispatcher():
-> 2471 ret_val = wrap_fake_exception(
2472 lambda: run_node(tx.output, node, args, kwargs, nnmodule)
2473 )
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/utils.py in wrap_fake_exception(fn)
2015 def wrap_fake_exception(fn):
2016 try:
-> 2017 return fn()
2018 except UnsupportedFakeTensorException as e:
2019 from .exc import unimplemented
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/utils.py in <lambda>()
2470 with tx.fake_mode, enable_python_dispatcher():
2471 ret_val = wrap_fake_exception(
-> 2472 lambda: run_node(tx.output, node, args, kwargs, nnmodule)
2473 )
2474 except Unsupported:
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/utils.py in run_node(tracer, node, args, kwargs, nnmodule)
2602 unimplemented(make_error_message(e), from_exc=e)
2603 except Exception as e:
-> 2604 raise RuntimeError(make_error_message(e)).with_traceback(
2605 e.__traceback__
2606 ) from e
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/utils.py in run_node(tracer, node, args, kwargs, nnmodule)
2584 try:
2585 if op == "call_function":
-> 2586 return node.target(*args, **kwargs)
2587 elif op == "call_method":
2588 return getattr(args[0], node.target)(*args[1:], **kwargs)
TorchRuntimeError: Failed running call_function <method 'set_' of 'torch._C.TensorBase' objects>(*(FakeTensor(..., size=(3,)), FakeTensor(..., size=(3,), dtype=torch.bfloat16)), **{}):
Could not set tensor of type c10::BFloat16 to a tensor of type float
from user code:
File "<ipython-input-4-3d237e4f713e>", line 5, in func
x.data = x.to(dtype=torch.bfloat16)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.12 (main, Apr 9 2025, 08:55:54) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.123+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4400.44
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.5.3.2
[pip3] nvidia-cuda-cupti-cu12==12.5.82
[pip3] nvidia-cuda-nvrtc-cu12==12.5.82
[pip3] nvidia-cuda-runtime-cu12==12.5.82
[pip3] nvidia-cudnn-cu12==9.3.0.75
[pip3] nvidia-cufft-cu12==11.2.3.61
[pip3] nvidia-curand-cu12==10.3.6.82
[pip3] nvidia-cusolver-cu12==11.6.3.83
[pip3] nvidia-cusparse-cu12==12.5.1.3
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] optree==0.15.0
[pip3] pynvjitlink-cu12==0.5.2
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @chauhang @mori360 @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @bdhirsh
| true
|
3,018,943,411
|
Fix take_along_dim negative index handling (#146211)
|
KaaustaaubShankar
|
open
|
[
"triaged",
"open source",
"release notes: cpp"
] | 5
|
NONE
|
Fixes: #146211
This PR fixes an issue with `torch.take_along_dim()` not correctly handling negative indices. Previously, using negative values in the `indices` tensor caused an out-of-bounds error. This update wraps indices correctly, matching Python-style indexing semantics.
### 🔧 Changes
- Modified `_take_along_dim_helper` to apply modulo logic for dimension-safe negative indexing.
- Added a unit test `test_take_along_dim_negative_indices` to `test/test_indexing.py` to assert correctness of negative indexing behavior.
### 🧪 Testing
```bash
pytest test/test_indexing.py -k test_take_along_dim_negative_indices
```
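To make the intended wrapping concrete, here is a plain-Python sketch of the modulo logic the helper applies (my reconstruction of the behavior, not the actual C++ code):

```python
def wrap_index(i, size):
    # Python-style negative indexing: -1 -> size-1, -size -> 0,
    # non-negative indices pass through unchanged
    return i % size

assert [wrap_index(i, 4) for i in (-1, -4, 0, 2)] == [3, 0, 0, 2]
```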
| true
|
3,018,925,038
|
[Kineto] Enable OOM observer
|
mzzchy
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Summary:
# Context:
When a memory leak happens, it usually triggers an OOM in a later iteration. A snapshot of the full iteration would be huge and hard to interpret.
On the CUDA side, there is an OOM observer which, when OOM happens, generates a snapshot containing the latest 1,500,000 entries for debugging.
In this diff, we implement the same feature on the MTIA side.
Test Plan:
Run this test with last diff in the stack.
```
buck run @//mode/opt kineto/libkineto/fb/mtia/integration_tests:mtia_memory_auto_trace_test
```
As shown, the memory snapshot is generated when OOM happens
Log: P1794792326
Snapshot: https://fburl.com/pytorch_memory_visualizer/lx73y6s3 {F1977402355}
Differential Revision: D71993315
| true
|
3,018,883,048
|
Add dynamo config to HOP-ify context managers
|
soulitzer
|
open
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152159
* #152158
```
# Note [Hopifying Context Managers]
#
# If the context manager class has been added to a opt-in dynamo config,
# we will convert it into a generic context manager HOP. When the
# HOP is later called in AOTAutograd, it will run the captured
# graph under the ctx.
#
# How does this work?
# - On enter:
# - Enter into a new subtracer
# - The subtracer traces the body of the context manager into a graph
# - On exit:
# - Exit the subtracer
# - Grab the inner fx graph from the subtracer and install it onto
# the outer fx graph
# - Create node on the outer fx graph that calls this subgraph
#
# Some notes:
#
# 1. Determining the inputs and outputs
#
# One trickiness here is that a HOP requires a function as input,
# but, unlike functions, context managers don't have explicit inputs and
# outputs. For inputs, we rely on lifted_freevars, which the subtracer
# already tracks for ordinary HOPs. For outputs, ideally we might only
# return the outputs that are referenceable later, e.g. not temporaries,
# but doing that is hard, so instead we just return all intermediates
# and rely on AOTAutograd to trace through the HOP and DCE any
# unnecessary ops.
#
# 2. Fixing VariableTracker proxies
#
# Inserting the call to the subgraph into the outer fx graph creates
# fresh variable trackers and proxies, but the instruction translator
# still refers the original VTs, e.g. in its symbolic locals and these
# VTs still hold the inner fx graph's proxies, which is wrong.
#
# We fix this by updating the VTs to hold the new outer fx graph
# proxies (e.g. corresponding to the getitems of the call to the
# subgraph). To know what those VTs are, we maintain a mapping from
# fx graph node name to the VTs, which is updated every time a new
# TensorVariable is created.
```
From discussion in HOP meeting:
- Why do we need to put the body of the ctx in a HOP? Why not just insert two nodes into the graph?
- There are pregrad passes and we don't want to match across the boundary
- Could there be issues for caching?
- Yes, we store the args to reconstruct a ctx onto the GraphModule, but those args aren't necessarily hashable. In this PR we limit to allowing only constants args, though we leave an exception for AC to pass in policy_fn (we'll deal with that later).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,018,882,968
|
Add AC_TRACER Infra TorchDispatchMode key
|
soulitzer
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152159
* __->__ #152158
Why do we need an additional infra mode for the new version of AC?
- For the new version of AC, we want to trace a graph and then replay it (in pieces) during backward. We'd like this graph to have all the user modes and subclasses already desugared so that during recompute (1) we don't need to reenable any ambient modes, and (2) we don't require user logic in the subclasses and modes to execute in the exact same manner.
- Also see https://github.com/soulitzer/ac-experimental/blob/main/_impl/tracer.py
Is it possible to support this with the "NotImplemented" trick we use for subclasses?
- Unfortunately not. Returning NotImplemented does work when you want to handle subclasses before modes, but not if you want to handle certain modes before other modes. This is because each invocation of `handle_torch_function_no_python_arg_parser` processes only the single top-most mode at a time, and will error if what it gets at the end of that is NotImplemented with `TypeError: Multiple dispatch failed for 'torch._ops.aten.cos.default'; all __torch_dispatch__ handlers returned NotImplemented:`
e.g., it is expecting either the top-most mode or at least one of the subclasses args to not return NotImplemented.
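A toy model of that dispatch behavior, as I understand the description above (simplified, not the real implementation):

```python
def handle(modes, subclass_handlers, args):
    # Only the single top-most mode runs per invocation; if it and all
    # subclass handlers return NotImplemented, dispatch errors out --
    # there is no way to fall through to a lower mode.
    if modes:
        r = modes[0](args)
        if r is not NotImplemented:
            return r
    for h in subclass_handlers:
        r = h(args)
        if r is not NotImplemented:
            return r
    raise TypeError("Multiple dispatch failed")

defer = lambda args: NotImplemented
lower = lambda args: "handled by lower mode"
try:
    handle([defer, lower], [], ())
except TypeError:
    pass  # errors even though a lower mode could have handled it
```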
Infra-mode ordering
- AC_TRACER should be a higher priority infra mode than functional/fake mode so that during compile we can trace through it.
---
Modes order before this PR:
- user modes
- infra modes for compile
Modes order after this PR:
- user modes
- ac tracer
- infra modes for compile
| true
|
3,018,839,573
|
[Typing] Enable torch.types.IntLikeType / FloatLikeType / BoolLikeType
|
shink
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"suppress-bc-linter"
] | 3
|
CONTRIBUTOR
|
### Changes
Replace `Union[SymInt, int]` and `Union[int, SymInt]` with `IntLikeType`.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,018,819,854
|
Note some limit in docstring of `padding` in Poolnd
|
ILCSFNO
|
closed
|
[
"module: docs",
"module: nn",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 📚 The doc issue
The doc of [torch.nn.functional.avg_pool2d()](https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool2d.html#torch-nn-functional-avg-pool2d) shows its description as below:
https://github.com/pytorch/pytorch/blob/7f28c03fac11dc3cf37da36def7e0857c331843d/torch/nn/functional.py#L396-L397
Repro below shows an error.
### Repro 1
```python
import torch
input_data = torch.randn(1, 3, 4, 4)
pool = torch.nn.functional.avg_pool2d(input_data, (3, 3), stride=1, padding=2)
```
### Output 1
```text
RuntimeError: pad should be at most half of effective kernel size, but got pad=2, kernel_size=3 and dilation=1
```
I accept that it should error, but this limit should be documented in the docstring of `padding`.
Something similar happens with `torch.nn.AvgPool2d` and possibly other `poolnd` funcs:
### Repro 2
```python
import torch
input_data = torch.randn(1, 3, 4, 4)
pool = torch.nn.AvgPool2d((3, 3), stride=1, padding=2)
pool(input_data)
```
### Output 2
```text
RuntimeError: pad should be at most half of effective kernel size, but got pad=2, kernel_size=3 and dilation=1
```
Thanks for noting.
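For the docstring, the constraint implied by the error message ("pad should be at most half of effective kernel size") can be stated as below; this is my reading of the message, not taken from the source:

```python
def max_pool_padding(kernel_size, dilation=1):
    # effective kernel size with dilation, then "at most half" of it
    effective = dilation * (kernel_size - 1) + 1
    return effective // 2

# kernel_size=3, dilation=1 -> padding must be <= 1, so padding=2 errors
assert max_pool_padding(3) == 1
assert max_pool_padding(5, dilation=2) == 4
```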
### Suggest a potential alternative/fix
* Suggest noting this limit in the docstring of `padding` in Poolnd
cc @svekars @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
3,018,791,423
|
torch.compile on MPS fails: generated Metal kernel uses loop-local variable out of scope
|
yusungsim
|
open
|
[
"triaged",
"module: mps",
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
I'm a total newcomer to PyTorch programming. I encountered this bug while trying to run the [example code for nari-labs/dia](https://github.com/nari-labs/dia) on my M2 Mac.
When I ran the example using torch.compile(...), I hit a compile-time error from TorchInductor's Metal backend. Since I wasn't sure how to interpret the error, I asked ChatGPT (GPT-4o) for help. I shared the full error message and even pasted the contents of torch/_inductor/codegen/mps.py, and we discussed where the bug might be coming from.
Sorry in advance if this is a duplicate.
I just hope this bug report helps Torch developers catch an edge case in the new Metal backend and improve support for MPS!
⚠️ The following is a diagnosis and explanation generated by ChatGPT-4o
TorchInductor’s Metal (MPS) backend generates invalid .metal shader code when compiling certain reduction-heavy operations under torch.compile(...). Specifically, it emits code where temporary variables and loop indices (e.g., tmp3, r0_0) are declared inside a loop but accessed after the loop has ended. This violates C++/Metal scoping rules and leads to a hard compile-time SyntaxError.
This issue occurs in multistage reductions, which are triggered when the reduction axis exceeds the maximum threadgroup size (e.g., dimension size > 1024). The faulty code is emitted in torch/_inductor/codegen/mps.py by the MetalKernel.codegen_body() method, which inserts store instructions (self.stores) after the reduction loop, despite the necessary values being defined inside the loop.
As a result, valid high-level PyTorch code fails to compile on MPS devices via TorchInductor, even when eager and CUDA backends work fine.
✅ Minimal Reproducer
```py
x = torch.randn(1, 1028, device="mps")
mask = torch.randint(0, 2, (1, 1028), dtype=torch.bool, device="mps")
def masked_softmax(x, mask):
x = x.masked_fill(mask, float('-inf'))
return torch.nn.functional.softmax(x, dim=-1)
compiled_fn = torch.compile(masked_softmax)
compiled_fn(x, mask) # triggers compile error on Metal
```
💥 Error Message
```
error: use of undeclared identifier 'tmp3'
auto tmp5 = tmp3 - tmp4;
^~~~
error: use of undeclared identifier 'r0_0'
out_ptr2[r0_0] = ...
^~~~
```
Full traceback shows the kernel failing to compile inside:
```
torch/_inductor/codegen/mps.py → MetalKernel.codegen_body
```
### Error logs
****
```
/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/codegen/mps.py:721: UserWarning: torch.compile for Metal is an early protoype and might not work as expected. For details see https://github.com/pytorch/pytorch/issues/150121
_warn_prototype()
Traceback (most recent call last):
File "/Users/yusungsim/Projects/dia-example/ex.py", line 11, in <module>
compiled_fn(x, mask) # ❌ This triggers the bug
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 663, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 760, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 745, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1295, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1197, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2083, in compile_to_module
return self._compile_to_module()
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2130, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2747, in load_by_key_path
mod = _reload_python_module(key, path)
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 36, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/var/folders/p8/r899gbxx0w50d5cb596qrk100000gn/T/torchinductor_yusungsim/3k/c3k5cvrgkkktg6mo4ehlkyk35slkrusgn6cwvu77rzsusxoecv6t.py", line 44, in <module>
mps_lib_0 = compile_mps_shader("""
File "/Users/yusungsim/Projects/dia-example/.env/lib/python3.10/site-packages/torch/_inductor/runtime/runtime_utils.py", line 181, in compile_mps_shader
raise SyntaxError(f"failed to compile {source} with {err.msg}") from err
torch._inductor.exc.InductorError: SyntaxError: failed to compile
#include <c10/metal/random.h>
#include <c10/metal/special_math.h>
#include <c10/metal/utils.h>
#include <c10/metal/reduction_utils.h>
kernel void generated_kernel(
device float* out_ptr2,
constant bool* in_ptr0,
constant float* in_ptr1,
uint2 thread_pos [[thread_position_in_grid]],
uint2 group_pos [[thread_position_in_threadgroup]]
) {
auto xindex = thread_pos.x;
auto r0_index = thread_pos.y;
threadgroup float tmp_acc_0[1024];
tmp_acc_0[r0_index] = ::metal::numeric_limits<float>::lowest();
threadgroup float tmp_acc_1[1024];
for(auto r0_0_cnt = 0; r0_0_cnt < 2; ++r0_0_cnt) {
int r0_0 = 2 * r0_index + r0_0_cnt;
if (r0_0 >= 1028) break;
auto tmp0 = in_ptr0[r0_0];
auto tmp1 = in_ptr1[r0_0];
auto tmp2 = -HUGE_VALF;
auto tmp3 = tmp0 ? tmp2 : tmp1;
tmp_acc_0[r0_index] = ::c10::metal::max(tmp_acc_0[r0_index], tmp3);
}
auto tmp4 = c10::metal::threadgroup_max(tmp_acc_0, 1024);
auto tmp5 = tmp3 - tmp4;
auto tmp6 = metal::exp(tmp5);
tmp_acc_1[r0_index] = tmp6;
auto tmp7 = c10::metal::threadgroup_sum(tmp_acc_1, 1024);
auto tmp8 = tmp6 / tmp7;
out_ptr2[r0_0] = static_cast<float>(tmp8);
}
with program_source:845:25: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
for (int idx = 1; idx < size; ++idx) {
~~~ ^ ~~~~
program_source:858:25: warning: comparison of integers of different signs: 'int' and 'unsigned int' [-Wsign-compare]
for (int idx = 1; idx < size; ++idx) {
~~~ ^ ~~~~
program_source:890:21: error: use of undeclared identifier 'tmp3'; did you mean 'tmp4'?
auto tmp5 = tmp3 - tmp4;
^~~~
tmp4
program_source:889:14: note: 'tmp4' declared here
auto tmp4 = c10::metal::threadgroup_max(tmp_acc_0, 1024);
^
program_source:895:18: error: use of undeclared identifier 'r0_0'
out_ptr2[r0_0] = static_cast<float>(tmp8);
^
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Versions
python3 collect_env.py
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 24493 100 24493 0 0 52902 0 --:--:-- --:--:-- --:--:-- 53015
Collecting environment information...
PyTorch version: 2.7.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 19.1.7
CMake version: version 3.30.2
Libc version: N/A
Python version: 3.10.15 (main, Oct 15 2024, 16:34:09) [Clang 15.0.0 (clang-1500.0.40.1)] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] torch==2.7.0
[pip3] torch-stoi==0.2.3
[pip3] torchaudio==2.7.0
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @chauhang @penguinwu
| true
|
3,018,788,020
|
Some Performance Bug in `tol` of `torch.lobpcg()`
|
ILCSFNO
|
closed
|
[
"triaged",
"module: linear algebra"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Apart from the doc issues in #152107, there is something related to its performance.
The doc of [torch.lobpcg()](https://pytorch.org/docs/stable/generated/torch.lobpcg.html#torch-lobpcg) shows its description as below:
https://github.com/pytorch/pytorch/blob/d743a7bd85d2d793bc0e2a38d4538276ce06b601/torch/_lobpcg.py#L421-L424
See the Repros below, which differ only in the value of `tol`:
### Repro 1
```python
import torch
def generate_input_data():
A = torch.randn(10, 10)
A = (A @ A.t())
X = torch.randn(10, 2)
B = torch.eye(10)
return (A, B, X)
(A, B, X) = generate_input_data()
(eigenvalues, eigenvectors) = torch.lobpcg(A=A, B=B, X=X, k=2, method='ortho', tol=1e-07, niter=(- 1))
print('Eigenvalues:', eigenvalues)
print('Eigenvectors:', eigenvectors)
print('')
```
### Output 1
```text
Eigenvalues: tensor([35.8473, 31.5591])
Eigenvectors: tensor([[-0.4646, -0.3603],
[ 0.1947, 0.1568],
[ 0.5241, -0.1421],
[-0.0367, -0.1245],
[-0.1021, -0.7437],
[-0.0699, -0.0554],
[ 0.1352, -0.4456],
[ 0.0325, -0.0316],
[-0.5043, 0.1246],
[-0.4258, 0.1968]])
```
### Repro 2
```python
import torch
def generate_input_data():
A = torch.randn(10, 10)
A = (A @ A.t())
X = torch.randn(10, 2)
B = torch.eye(10)
return (A, B, X)
(A, B, X) = generate_input_data()
(eigenvalues, eigenvectors) = torch.lobpcg(A=A, B=B, X=X, k=2, method='ortho', tol=1e-08, niter=(- 1))
print('Eigenvalues:', eigenvalues)
print('Eigenvectors:', eigenvectors)
print('')
```
### Output 2
```text
# Hangs For More Than 40min!
```
The only change is `tol`, from `1e-07` to `1e-08`: `Repro 1` finishes in 2.6 s, but in `Repro 2` the program has been hanging for more than 40 min.
### Suggestion
Not sure of the root cause, so no fix is proposed.
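One hypothesis (mine, not verified against the `lobpcg` source): with `niter=-1` the iteration runs until the residual drops below `tol`, and `1e-08` is below single-precision machine epsilon, so a float32 residual may never reach it:

```python
# float32 machine epsilon is 2**-23
eps32 = 2.0 ** -23
assert abs(eps32 - 1.1920928955078125e-07) < 1e-20
# the requested tolerance sits below what float32 can resolve,
# while tol=1e-07 is just within reach
assert 1e-08 < eps32
```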
### Versions
Nightly
cc @jianyuh @nikitaved @mruberry @walterddr @xwang233 @Lezcano
| true
|
3,018,782,757
|
IGNORE: Test Bazel OIDC Failure
|
zxiiro
|
closed
|
[
"open source",
"release notes: releng",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,018,776,835
|
padding_mode `reflect` works different from others in Conv
|
ILCSFNO
|
open
|
[
"module: nn",
"triaged",
"module: padding"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The doc of [torch.nn.Conv3d()](https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#conv3d) shows its description as below:
https://github.com/pytorch/pytorch/blob/7f28c03fac11dc3cf37da36def7e0857c331843d/torch/nn/modules/conv.py#L621
See repro below:
### Repro
```python
import torch
input_data = torch.randn((1, 1, 10, 10))
conv3d = torch.nn.Conv3d(1, 10, kernel_size=3, padding=1, padding_mode='reflect') # 'zeros', 'reflect', 'replicate' or 'circular'
output = conv3d(input_data)
```
### Output
```text
RuntimeError: Argument #8: Padding size should be less than the corresponding input dimension, but got: padding (1, 1) at dimension 1 of input [1, 1, 10, 10]
```
When I use the other valid `padding_mode` values (`'zeros'`, `'replicate'`, `'circular'`), it runs fine,
but in `'reflect'` mode it raises an error.
I'm not sure whether this is expected behavior; since the docs don't describe these modes, I'm reporting it as a bug.
It may be something introduced after #36089.
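A plausible explanation (my assumption): reflection padding mirrors interior elements, so it requires `pad < size` along each padded dimension, and an unbatched 4-D input to `Conv3d` is read as `(C, D, H, W) = (1, 1, 10, 10)`, leaving a depth of only 1 to pad by 1:

```python
def reflect_pad_ok(pad, size):
    # reflection needs at least pad+1 elements to mirror from
    return pad < size

# depth dimension of the unbatched (1, 1, 10, 10) input has size 1,
# matching "padding (1, 1) at dimension 1" in the error message
assert not reflect_pad_ok(1, 1)
# the spatial dims of size 10 would be fine
assert reflect_pad_ok(1, 10)
```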
### Suggestions
* Fix the docs to show more details of mode usage, likewise in other convnd funcs
* If it is unexpected behavior, fix the code
### Versions
Nightly
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
3,018,774,146
|
[dynamic shapes] support SymInt inputs for kthvalue
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 18
|
CONTRIBUTOR
| null | true
|
3,018,722,338
|
[Scaled MM] Update to support on B200 TN, NT, NN, TT Layouts are supported
|
drisspg
|
open
|
[
"module: performance",
"module: cuda",
"triaged",
"module: float8"
] | 0
|
CONTRIBUTOR
|
# Summary
On SM100 with CUDA 12.8, cuBLAS supports all 4 variants. We should update our PerTensor scaling kernel to allow these layouts.
We can also update our recipes in TorchAO to not require this data transposition. Since the MMA atom supports TN, NT, NN, and TT, we should also update our rowwise scaling kernel to not require this layout.
cc @msaroufim @jerryzh168 @ptrblck @eqy @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
3,018,707,359
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
3,018,702,480
|
[Graph Partition] support ForeachKernelSchedulerNode
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
`ForeachKernelSchedulerNode` is missing `outputs_by_name` when created from previous nodes. This PR fixes the issue.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,018,679,075
|
Unify how we create random inputs for auto-tuning
|
masnesral
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152502
* __->__ #152147
Summary: We're creating autotune inputs slightly differently when autotuning in-process vs. in a subprocess: One implementation is in TensorMeta.to_tensor() and another in AlgorithmSelectorCache.benchmark_example_value. Update the TensorMeta logic to be consistent with AlgorithmSelectorCache.benchmark_example_value() and call it from AlgorithmSelectorCache.benchmark_example_value() instead.
Test Plan: Existing unit tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,018,678,657
|
[dynamic shapes] guard_or_false for infer_size
|
pianpwk
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,018,667,407
|
Package const folded graph's cubin file
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Summary: We need to package the const-folded graph's cubin file into the final .pt2 package.
Fix https://github.com/pytorch/pytorch/issues/152067
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r test_constant_folding_cuda
```
Differential Revision: D73626480
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,018,661,817
|
WIP: divup op
|
msaroufim
|
open
|
[
"module: cpu",
"topic: new features",
"topic: not user facing"
] | 3
|
MEMBER
|
Please don't bother reviewing; this code was not written by a human and is mostly for me to understand all the requirements of a new PyTorch operator
```
################################################################################
# MEGA-PROMPT: How to Teach an LLM to Add **ANY** New Element-wise PyTorch Op
#
# Copy this entire file verbatim into `llm.txt` (including these borders) and
# feed it to the model you are evaluating. Everything is plain text; there are
# no Markdown tables or rendered HTML — just one big, unformatted box.
#
# Your evaluation harness should:
# • present ONLY this text to the model
# • capture its stdout
# • treat that stdout as a `patch -p1` diff to be applied to a clean PyTorch
# checkout (main branch)
# • build & test (you decide which tests)
#
# The model “passes” when the patch builds and new tests succeed.
#
# This prompt teaches the model:
# 1. The **exact** files that typically need edits for a new operator
# 2. Why each edit is necessary
# 3. How to generalise the pattern for other ops
# 4. What the final diff must look like (no chatter, no logging)
#
# It uses `ceiling_divide`/`divup` as a fully-worked example. The annotated
# reference diff is NOT meant to be applied by the harness; it’s a tutorial
# for the LLM. When you build your own suite, swap-in a different example or
# trim as you like — but **keep the structure** so the model can learn.
################################################################################
──────────────────────────────── SECTION 1 — HIGH-LEVEL OVERVIEW ────────────────────────────────
Implementing a new element-wise op in PyTorch touches four layers:
(A) **Native C++ core (ATen):**
• Add dispatcher stub, CPU/CUDA kernels, and glue functions.
(B) **Public dispatcher metadata:**
• Register schemas in `native_functions.yaml`.
(C) **Python reference / decomposition layer (torch/_refs):**
• Provide a mathematically-correct reference so Autograd & PrimTorch work.
(D) **Python surface & tests:**
• Expose dunder overloads (`Tensor.__op__`) and add tests.
If you forget ANY layer, either the build fails or runtime raises
“operator not found” errors.
──────────────────────────────── SECTION 2 — STEP-BY-STEP CHECKLIST ─────────────────────────────
For *every* new element-wise binary operator **OP** (here `ceiling_divide`
alias `divup`) follow this template:
1. **Dispatcher symbol**
• In the cpp file that already hosts similar ops, add
`DEFINE_DISPATCH(<op_stub>);`
• Add a matching `DECLARE_DISPATCH` to the header.
2. **Kernels**
• Write at least one device kernel (CPU is mandatory, CUDA optional).
• Register each with `REGISTER_DISPATCH(<op_stub>, &<kernel_fn>);`.
3. **Backend-agnostic wrappers**
• Add `*_impl` helpers that create a `TensorIterator`, call the stub, and
return the result.
• Provide `op`, `op_`, and `op_out` overloads for
- Tensor × Tensor
- Tensor × Scalar (wrap scalar via `wrapped_scalar_tensor`)
• Keep scalar wrappers in the same file so Autograd tracing works.
4. **`native_functions.yaml` entries**
• One line per overload; point CPU/MPS (and CUDA if ready) to
the C++ symbols from step 3.
• If a variant is still Composite, mark it `CompositeExplicitAutograd`.
5. **Reference implementation**
• In `torch/_refs/__init__.py` create a decorated
`_make_elementwise_binary_reference` function that calls existing ops.
• For aliases, write a thin function that forwards to the canonical op.
6. **Python dunder support**
• In `torch/_tensor.py` add `__op__` and `__rop__` methods that dispatch to
`torch.OP`.
7. **Call-sites inside PyTorch**
• Replace any ad-hoc math (e.g. `divup(x,y)`) in core helpers with the new
public API to keep codebase consistent.
8. **Tests**
• Add a dedicated `test/test_<op>.py` covering
– Tensor × Tensor, Tensor × Scalar, in-place, alias
– Integers, floats, corner cases (zero, inf, sign combinations)
• Use `onlyCPU` for evaluation harness simplicity (GPU optional).
9. **Final diff hygiene**
• Produce **one** unified diff rooted at repo top.
• No commentary outside diff; harness pipes it straight into `patch`.
• Lines outside the diff (like this prompt) are never printed by the model.
───────────────────────────── SECTION 3 — ANNOTATED REFERENCE DIFF ──────────────────────────────
The block below is a *teaching* diff. Comments start with `//!` so they are
ignored by `patch`. Study why each hunk exists — you’ll copy the pattern when
generating a diff for a *different* operator.
----8<----------------------------------------------------------------------
diff --git a/aten/src/ATen/native/BinaryOps.cpp b/aten/src/ATen/native/BinaryOps.cpp
index f5d5edb6439..8380296da25 100644
--- a/aten/src/ATen/native/BinaryOps.cpp
+++ b/aten/src/ATen/native/BinaryOps.cpp
@@
DEFINE_DISPATCH(div_trunc_stub);
+DEFINE_DISPATCH(div_ceil_stub); //! 1A – new dispatcher symbol
DEFINE_DISPATCH(remainder_stub);
@@
+// Ceiling division implementation
+Tensor& ceiling_divide_out_impl( … ) { //! 3 – backend-agnostic glue
+ auto iter = TensorIterator::binary_op(…);
+ div_ceil_stub(iter.device_type(), iter); //! 3 – call the stub
+ …
+}
+… //! 3 – provide *_impl, _out, _
+
+// Alias for ceiling_divide
+Tensor& divup_out( … ) { return ceiling_divide_out_impl(…); } //! 3 – alias
@@
Tensor mul( … ); //! pre-existing code
@@
+Tensor ceiling_divide(const Tensor& self, const Scalar& other) { … } //! 3 – Scalar wrapper
diff --git a/aten/src/ATen/native/BinaryOps.h b/aten/src/ATen/native/BinaryOps.h
@@
DECLARE_DISPATCH(structured_binary_fn, div_trunc_stub)
+DECLARE_DISPATCH(structured_binary_fn, div_ceil_stub) //! 1B – header
@@
+// Forward declarations so other C++ can call the op
+Tensor& ceiling_divide_out(…);
+Tensor ceiling_divide(…);
+Tensor& divup_out(…);
+Tensor divup(…);
diff --git a/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp b/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp
@@
+void div_ceil_kernel(TensorIteratorBase& iter) { //! 2A – CPU kernel
+ …
+}
@@
+REGISTER_DISPATCH(div_ceil_stub, &div_ceil_kernel) //! 2B – hook kernel
diff --git a/aten/src/ATen/native/cpu/utils.h b/aten/src/ATen/native/cpu/utils.h
@@
-int64_t thread_averge_payload = std::max((int64_t)1, divup(nnz, num_threads));
+int64_t thread_averge_payload = std::max((int64_t)1, at::divup(nnz, num_threads)); //! 7
diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml
@@
+- func: ceiling_divide(Tensor self, Tensor other) -> Tensor //! 4 – schema
+ dispatch:
+ CPU, MPS: ceiling_divide
+…
diff --git a/test/test_divup.py b/test/test_divup.py
+ … //! 8 – new tests
diff --git a/torch/_refs/__init__.py b/torch/_refs/__init__.py
+@_make_elementwise_binary_reference //! 5 – reference
+def ceiling_divide(a, b): …
diff --git a/torch/_tensor.py b/torch/_tensor.py
+ def __divup__(self, other): return torch.ceiling_divide(self, other) //! 6
----8<----------------------------------------------------------------------
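The only genuinely op-specific part of the diff above is the kernel math in hunk 2A. A minimal sketch of correct ceiling-division semantics, in pure Python for clarity — the C++ kernel would apply the same identity elementwise:

```python
import math

def ceil_div(a, b):
    # Negating floor division twice yields ceiling division for every
    # sign combination with no float round-off. Note the naive
    # (a + b - 1) // b identity is only valid when both operands are
    # positive.
    return -(-a // b)

# spot-check all sign combinations against math.ceil
for a in (7, -7):
    for b in (2, -2):
        assert ceil_div(a, b) == math.ceil(a / b)
```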
──────────────────────────────── SECTION 4 — GENERALISATION RULES ───────────────────────────────
When you implement **another** op (say `logical_xor`):
* Replace every `ceiling_divide` with `logical_xor`, `div_ceil_stub` with
`logical_xor_stub`, etc.
* Kernel math changes, but the scaffolding (DEFINE_DISPATCH → kernel →
REGISTER_DISPATCH → yaml → refs → dunder → tests) stays identical.
* Always provide Tensor × Scalar wrappers even if mathematically trivial; many
internal utilities rely on them.
* If Autograd is needed, mark CompositeExplicitAutograd in YAML OR write a
derivative in `tools/autograd`. (For pure integer/floating ops usually a
composite is fine.)
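The renaming rule can be made mechanical. A hypothetical helper (the function and its key names are illustrative only, not part of any PyTorch tooling) that derives the scaffolding identifiers each hunk must touch:

```python
def scaffold_symbols(op, stub=None):
    """Derive the scaffolding identifiers for a new elementwise op.

    The dispatch stub may differ from the op name (ceiling_divide uses
    div_ceil_stub), so it is accepted as an override.
    """
    stub = stub or f"{op}_stub"
    return {
        "dispatch_stub": stub,       # DEFINE/DECLARE/REGISTER_DISPATCH
        "cpu_kernel": f"{op}_kernel",  # BinaryOpsKernel.cpp
        "out_variant": f"{op}_out",    # BinaryOps.cpp glue
        "inplace_variant": f"{op}_",   # in-place wrapper
        "yaml_func": op,               # native_functions.yaml schema
    }
```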
──────────────────────────────── SECTION 5 — WHAT YOUR OUTPUT MUST BE ───────────────────────────
**The model’s entire stdout** must be a **single unified diff** with NO extra
commentary. Think of it as running `git diff` and pasting the result.
Anything else (print statements, JSON, progress bars) will break `patch`.
Use spaces, not tabs, in diff context lines. Do not truncate large files; the
patch must be self-contained and apply cleanly.
──────────────────────────────── SECTION 6 — END-OF-FILE ────────────────────────────────────────
# Nothing below this line is part of the prompt.
################################################################################
```
| true
|
3,018,627,877
|
Reducer: add check on received data to avoid segfault
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
MEMBER
|
When `ncclCommAbort` is called it may return invalid/corrupted data to the reducer. This adds a check so we don't read past the end of the tensors, leading to a segfault.
While this looks like it could be a security issue, it actually isn't, since we only read past the end of the buffer rather than writing.
Fixes #149418
Test plan:
https://gist.github.com/d4l3k/b47c2c95cf9c37e78069e19f1b6ed2c6
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab
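The kind of validation described here can be sketched in pure Python; the function name and shape are hypothetical (the real check lives in the C++ reducer and operates on bucket metadata):

```python
def validate_bucket_lengths(lengths, buffer_numel):
    """Reject received metadata that would index past the end of the
    flat communication buffer instead of blindly reading from it."""
    total = 0
    for n in lengths:
        if n < 0 or n > buffer_numel - total:
            raise RuntimeError("corrupt bucket metadata received")
        total += n
    return total
```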
| true
|
3,018,606,221
|
[inductor] pass reduction idx to scan inner_fns
|
isuruf
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152142
Closes https://github.com/pytorch/pytorch/pull/151931
Fixes https://github.com/pytorch/pytorch/issues/151738
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,018,557,123
|
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::scatter_reduce.two
|
aboubezari
|
closed
|
[
"module: performance",
"triaged",
"enhancement",
"module: vmap",
"module: functorch"
] | 3
|
NONE
|
### 🐛 Describe the bug
```
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::scatter_reduce.two.
Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:81.)
scattered_1d = torch.scatter_reduce(input=tensor_1d,
```
You can reproduce it with any vmap function with torch.scatter_reduce.
```python
import torch
src = torch.tensor([1., 2., 3., 4., 5., 6.])
index = torch.tensor([1, 1, 0, 1, 2, 1])
input = torch.tensor([1., 2., 3., 4.])
# Simulate a batch dimension of 1
src = src.unsqueeze(0)
index = index.unsqueeze(0)
input = input.unsqueeze(0)
def _fn(inputs):
_src, _index, _input = inputs
return torch.scatter_reduce(_input, 0, _index, _src, reduce="sum")
result = torch.vmap(_fn)((src, index, input))
print(result)
```
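Until a batching rule lands, the slow fallback can be sidestepped by folding the batch dimension into the scatter indices. A hedged workaround sketch — it assumes every batch element scatters into a tensor of the same length `N`:

```python
import torch

def batched_scatter_sum(inp, index, src):
    # Offset each batch's indices by batch * N so a single flat
    # scatter_reduce reproduces the per-batch result without going
    # through vmap's unbatched fallback.
    B, N = inp.shape
    offsets = torch.arange(B, device=inp.device).unsqueeze(1) * N
    flat = torch.scatter_reduce(
        inp.reshape(-1), 0, (index + offsets).reshape(-1),
        src.reshape(-1), reduce="sum",
    )
    return flat.reshape(B, N)
```

With the tensors from the repro above this matches the vmap result (note `scatter_reduce` defaults to `include_self=True`, so the input values are part of the sum).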
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-1078-gcp-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.183.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.202
BogoMIPS: 4400.40
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 6 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] torch==2.5.1
[pip3] torchvision==0.19.1
[pip3] triton==3.1.0
[conda] numpy 2.1.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchvision 0.19.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @msaroufim @jerryzh168 @zou3519 @Chillee @samdow @kshitij12345
| true
|
3,018,518,193
|
[CI][CD] Unify install_cuda and install_cuda_aarch64 scripts
|
clee2000
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
- Generalize `install_cuda` so it can also handle aarch64
- Remove `install_cuda_aarch64` since `install_cuda` can now handle it
- Make `install_cuda` and `install_cudnn` functions in the `install_cuda` script, because most of the code is the same
| true
|
3,018,493,412
|
Check integrity of bytes in AppendingByteSerializer
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152139
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,018,419,873
|
[ONNX] Add group_norm support from opset 21
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 8
|
COLLABORATOR
|
I didn't run the model in test because ORT doesn't have the op yet. Nevertheless it should be leveraged for newer opset versions.
| true
|
3,018,390,564
|
[ca] expecttest and adjust a few tests
|
xmfan
|
open
|
[
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|