| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,800,314,548
|
Dynamo graph break on PEP585 generic types
|
aorenste
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
If you change test/dynamo/test_misc.py test_function_annotation() from:
```python
def inner(y: typing.List[Variable]):
    return x + 1
```
to
```python
def inner(y: list[Variable]):
    return x + 1
```
then Dynamo graph breaks instead of treating it the same way. This doesn't seem to happen on Python 3.9 but does happen on Python 3.12; I didn't try other versions.
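For reference, a minimal standalone sketch of this kind of repro (hypothetical, not the in-tree test; `Variable` is just a stand-in class, and `fullgraph=True` is used so the graph break surfaces as an error on affected Python versions):
```python
# Hypothetical standalone sketch, not test/dynamo/test_misc.py itself.
# The only point of interest is the PEP 585 `list[...]` annotation on `inner`.
import torch

class Variable:  # stand-in for the class used in the real test
    pass

@torch.compile(backend="eager", fullgraph=True)
def outer(x):
    def inner(y: list[Variable]):  # typing.List[Variable] reportedly does not break
        return x + 1
    return inner([])

print(outer(torch.ones(3)))  # reported to graph break on Python 3.12, but not 3.9
```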
### Versions
current main branch
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,800,218,474
|
The `sympy` dependency spec for pytorch on PyPi wheel is still unchanged.
|
stevenleeS0ht
|
closed
|
[
"oncall: releng",
"triaged",
"module: third_party"
] | 4
|
NONE
|
When running `pip install -U torch`, it still requires `sympy==1.13.1` and will uninstall the latest `sympy` (version `1.13.3`) from the virtual environment.
The reason is that the dependency spec in `setup.py` is unchanged.
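A quick way to confirm the pin recorded in an installed wheel's metadata (a sketch; the exact requirement string may differ between releases):
```python
# Inspect the sympy requirement stored in the installed torch wheel's metadata.
from importlib.metadata import requires

sympy_reqs = [r for r in (requires("torch") or []) if r.lower().startswith("sympy")]
print(sympy_reqs)  # e.g. a pinned 'sympy==1.13.1 ...' entry rather than 1.13.3
```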
| true
|
2,800,213,713
|
update sympy version 1.13.3 in setup.py (previously update only in requirement.txt)
|
stevenleeS0ht
|
open
|
[
"open source",
"Stale",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 14
|
NONE
|
Previously, the `sympy` version number was updated only in `requirements.txt`, while `setup.py` was left unchanged. The PyPI wheel relies on the dependency spec in `setup.py`, so only a change in `setup.py` will be effective.
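A hypothetical sketch of the kind of change described (illustration only, not the actual PyTorch `setup.py`, which is structured differently):
```python
# Hypothetical illustration: the wheel's dependency metadata on PyPI comes from
# install_requires in setup.py, so the sympy bound must be bumped here, not
# only in requirements.txt.
from setuptools import setup

setup(
    name="example-package",
    install_requires=[
        "sympy==1.13.3",  # previously pinned to an older version
    ],
)
```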
| true
|
2,800,036,058
|
Raise MutationError if there are side effects when returning generator
|
guilhermeleobas
|
closed
|
[
"open source",
"Merged",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142513
* __->__ #145223
* #144420
* #144424
* #144423
* #144422
* #144421
* #141055
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,799,780,114
|
Refactoring Distributed test cases to be device agnostic [1/n]
|
AnantGulati
|
closed
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"open source",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"module: compiled autograd"
] | 19
|
CONTRIBUTOR
|
In this series of PRs we intend to refactor the distributed test cases to be completely device agnostic.
These changes will include the following approaches (see the sketch after the list):
- Allowing for multiple device types using instantiate_device_type_test
- Replacing calls to CUDA streams with torch.get_device_module(device) wherever it applies
- Skipping setup steps required while using MultiProcessTestCase with DistributedTestBase (#138216) wherever applicable
- Replacing explicit calls to a distributed backend (NCCL, HCCL, etc.) with get_default_backend_for_device (#140536).
This should result in a significant improvement in usability for all devices.
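A minimal sketch of the device-agnostic pattern (assumed shape only; the real changes go through the internal test harness such as `instantiate_device_type_tests` and `DistributedTestBase`):
```python
# Sketch assuming torch.get_device_module is available (recent PyTorch releases).
import torch

def run_on(device_type: str) -> torch.Tensor:
    # Instead of hard-coding torch.cuda.*, look up the backend module for the
    # device type under test (torch.cuda, torch.xpu, torch.cpu, ...).
    device_mod = torch.get_device_module(device_type)
    if hasattr(device_mod, "synchronize"):
        device_mod.synchronize()  # replaces an explicit torch.cuda.synchronize()
    return torch.ones(2, device=device_type) + 1

print(run_on("cuda" if torch.cuda.is_available() else "cpu"))
```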
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan
| true
|
2,799,502,744
|
make latexpdf
|
dimpy-cmd
|
closed
|
[
"module: docs",
"module: ci",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
DEPRECATION: Legacy editable install of pytorch_sphinx_theme from git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme (from -r requirements.txt (line 2)) (setup.py develop) is deprecated. pip 25.0 will enforce this behaviour change. A possible replacement is to add a pyproject.toml or enable --use-pep517, and use setuptools >= 64. If the resulting installation is not behaving as expected, try using --config-settings editable_mode=compat. Please consult the setuptools documentation for more information. Discussion can be found at https://github.com/pypa/pip/issues/11
### Versions
pip 25 will break `pip install -e`, so from 2025 pip will no longer support this in the `make latexpdf` command.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,799,382,763
|
Regression in the compilation of the torch.all operation in PyTorch version 2.6.0 compared to 2.5.1
|
wdziurdz
|
open
|
[
"triaged",
"module: regression",
"oncall: pt2",
"module: dynamo",
"module: empty tensor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
There is an issue with tracing after upgrading to PyTorch 2.6.0 from 2.5.1. It appears to be a regression related to compiling the torch.all operation.
Before the upgrade, the code below compiles without any graph breaks in PyTorch 2.5.1:
```python
import torch

@torch.compile(backend="inductor")
def compiled_fn(input_tensor: torch.Tensor):
    output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
    result = torch.all(input_tensor, dim=2, out=output_tensor)
    return result

if __name__ == "__main__":
    input_tensor = torch.randint(0, 2, (2, 3, 4), dtype=torch.bool, device="cpu")
    output = compiled_fn(input_tensor)
```
The code compiles to the following FX graph in PyTorch 2.5.1:
```
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] TRACED GRAPH
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] /home/user1/venv1/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] def forward(self, L_input_tensor_: "b8[2, 3, 4][12, 4, 1]cpu"):
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] l_input_tensor_ = L_input_tensor_
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] # File: tests/compile/test_all.py:5 in compiled_fn, code: output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] empty: "b8[2, 3][3, 1]cpu" = torch.empty((0,), dtype = torch.bool)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] output_tensor: "b8[2, 3][3, 1]cpu" = empty.to(device(type='cpu')); empty = None
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] # File: tests/compile/test_all.py:6 in compiled_fn, code: result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] result: "b8[2, 3][3, 1]cpu" = torch.all(l_input_tensor_, dim = 2, out = output_tensor); l_input_tensor_ = output_tensor = None
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] return (result,)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
```
However, after upgrading to PyTorch 2.6.0, the code fails to compile to the same graph and results in graph breaks:
```
V0120 14:57:46.684000 74548 torch/_dynamo/output_graph.py:972] [0/0_1] COMPILING GRAPH due to GraphCompileReason(reason='out variants with resizing on graph inputs', user_stack=[<FrameSummary file tests/compile/test_all.py, line 6 in compiled_fn>], graph_break=True)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1615] [0/0_1] REMOVE UNUSED GRAPHARG L['input_tensor']
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] TRACED GRAPH
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] ===== __compiled_fn_2 =====
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] /home/user1/venv1/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] def forward(self):
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] # File: tests/compile/test_all.py:5 in compiled_fn, code: output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] empty: "b8[0][1]cpu" = torch.empty((0,), dtype = torch.bool)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] output_tensor: "b8[0][1]cpu" = empty.to(device(type='cpu')); empty = None
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] return (output_tensor,)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code]
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code]
```
Please investigate this regression.
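For comparison, a sketch of a variant that avoids the `out=` resizing path entirely (not an official workaround, just what the graph-break reason points at):
```python
# Sketch: dropping out= (or pre-sizing output_tensor to the reduced shape)
# sidesteps the "out variants with resizing on graph inputs" break reported above.
import torch

@torch.compile(backend="inductor")
def compiled_fn_no_out(input_tensor: torch.Tensor):
    return torch.all(input_tensor, dim=2)

x = torch.randint(0, 2, (2, 3, 4), dtype=torch.bool, device="cpu")
print(compiled_fn_no_out(x))
```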
Full logs 2.5.1:
```
V0120 14:51:10.919000 72022 torch/_dynamo/convert_frame.py:864] [0/0] torchdynamo start compiling compiled_fn tests/compile/test_all.py:3, stack (elided 5 frames):
V0120 14:51:10.919000 72022 torch/_dynamo/convert_frame.py:864] [0/0] File "tests/compile/test_all.py", line 14, in <module>
V0120 14:51:10.919000 72022 torch/_dynamo/convert_frame.py:864] [0/0] output = compiled_fn(input_tensor)
V0120 14:51:10.919000 72022 torch/_dynamo/convert_frame.py:864] [0/0]
I0120 14:51:10.920000 72022 torch/_dynamo/utils.py:859] [0/0] ChromiumEventLogger initialized with id 11952b32-9bff-4a1f-ae82-08757a4285ab
I0120 14:51:10.921000 72022 torch/_dynamo/logging.py:57] [0/0] Step 1: torchdynamo start tracing compiled_fn tests/compile/test_all.py:3
V0120 14:51:10.922000 72022 torch/fx/experimental/symbolic_shapes.py:2498] [0/0] create_env
V0120 14:51:10.939000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:5 in compiled_fn (compiled_fn)
V0120 14:51:10.939000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:51:10.940000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:51:10.941000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR empty [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:51:10.942000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_CONST (0,) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>)]
V0120 14:51:10.942000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>), TupleVariable(length=1)]
V0120 14:51:10.943000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR bool [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>), TupleVariable(length=1), PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:51:10.944000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('dtype',) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>), TupleVariable(length=1), ConstantVariable()]
V0120 14:51:10.944000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 2 [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>), TupleVariable(length=1), ConstantVariable(), TupleVariable(length=1)]
V0120 14:51:10.947000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR to [TensorVariable()]
V0120 14:51:10.947000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_FAST input_tensor [GetAttrVariable()]
V0120 14:51:10.948000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR device [GetAttrVariable(), LazyVariableTracker()]
V0120 14:51:10.948000 72022 torch/_dynamo/output_graph.py:2107] [0/0] create_graph_input L_input_tensor_ L['input_tensor']
V0120 14:51:10.949000 72022 torch/_dynamo/variables/builder.py:2702] [0/0] wrap_to_fake L['input_tensor'] (2, 3, 4) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None, None], constraint_strides=[None, None, None], view_base_context=None, tensor_source=LocalSource(local_name='input_tensor', cell_or_freevar=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0120 14:51:10.951000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [GetAttrVariable(), ConstantVariable()]
V0120 14:51:10.952000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE STORE_FAST output_tensor [TensorVariable()]
V0120 14:51:10.953000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:6 in compiled_fn (compiled_fn)
V0120 14:51:10.953000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:51:10.953000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:51:10.953000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR all [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:51:10.954000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_FAST input_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>)]
V0120 14:51:10.954000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_CONST 2 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>), TensorVariable()]
V0120 14:51:10.955000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_FAST output_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>), TensorVariable(), ConstantVariable()]
V0120 14:51:10.955000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('dim', 'out') [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>), TensorVariable(), ConstantVariable(), TensorVariable()]
V0120 14:51:10.956000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 3 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>), TensorVariable(), ConstantVariable(), TensorVariable(), TupleVariable(length=2)]
V0120 14:51:10.959000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE STORE_FAST result [TensorVariable()]
V0120 14:51:10.960000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:7 in compiled_fn (compiled_fn)
V0120 14:51:10.960000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] return result
V0120 14:51:10.960000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_FAST result []
V0120 14:51:10.960000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
I0120 14:51:10.961000 72022 torch/_dynamo/logging.py:57] [0/0] Step 1: torchdynamo done tracing compiled_fn (RETURN_VALUE)
V0120 14:51:10.961000 72022 torch/_dynamo/symbolic_convert.py:2971] [0/0] RETURN_VALUE triggered compile
V0120 14:51:10.961000 72022 torch/_dynamo/output_graph.py:1004] [0/0] COMPILING GRAPH due to GraphCompileReason(reason='return_value', user_stack=[<FrameSummary file tests/compile/test_all.py, line 7 in compiled_fn>], graph_break=False)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] TRACED GRAPH
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] /home/user1/venv1/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] def forward(self, L_input_tensor_: "b8[2, 3, 4][12, 4, 1]cpu"):
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] l_input_tensor_ = L_input_tensor_
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] # File: tests/compile/test_all.py:5 in compiled_fn, code: output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] empty: "b8[2, 3][3, 1]cpu" = torch.empty((0,), dtype = torch.bool)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] output_tensor: "b8[2, 3][3, 1]cpu" = empty.to(device(type='cpu')); empty = None
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] # File: tests/compile/test_all.py:6 in compiled_fn, code: result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] result: "b8[2, 3][3, 1]cpu" = torch.all(l_input_tensor_, dim = 2, out = output_tensor); l_input_tensor_ = output_tensor = None
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] return (result,)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
I0120 14:51:10.968000 72022 torch/_dynamo/logging.py:57] [0/0] Step 2: calling compiler function inductor
V0120 14:51:12.792000 72022 torch/fx/experimental/symbolic_shapes.py:5201] [0/0] eval True == True [statically known]
I0120 14:51:22.070000 72022 torch/fx/experimental/symbolic_shapes.py:3646] [0/0] produce_guards
W0120 14:51:22.072000 72022 torch/_inductor/debug.py:434] [0/0] model__0_inference_0 debug trace: /home/user1/qnpu/env_name/src/torch_compile_debug/run_2025_01_20_14_51_10_921557-pid_72022/torchinductor/model__0_inference_0.0
I0120 14:51:22.076000 72022 torch/_dynamo/logging.py:57] [0/0] Step 2: done compiler function inductor
I0120 14:51:22.080000 72022 torch/fx/experimental/symbolic_shapes.py:3646] [0/0] produce_guards
V0120 14:51:22.080000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].size()[0] 2 None
V0120 14:51:22.081000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].size()[1] 3 None
V0120 14:51:22.081000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].size()[2] 4 None
V0120 14:51:22.081000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].stride()[0] 12 None
V0120 14:51:22.082000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].stride()[1] 4 None
V0120 14:51:22.082000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].stride()[2] 1 None
V0120 14:51:22.082000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].storage_offset() 0 None
V0120 14:51:22.083000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].size()[0] == 2
V0120 14:51:22.083000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].size()[1] == 3
V0120 14:51:22.084000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].size()[2] == 4
V0120 14:51:22.084000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].stride()[0] == 12
V0120 14:51:22.085000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].stride()[1] == 4
V0120 14:51:22.085000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].stride()[2] == 1
V0120 14:51:22.085000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].storage_offset() == 0
V0120 14:51:22.086000 72022 torch/_dynamo/guards.py:2314] [0/0] [__guards] GUARDS:
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards]
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] TREE_GUARD_MANAGER:
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] +- RootGuardManager
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:471 in init_ambient_guards
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- GuardManager: source=L['input_tensor'], accessed_by=DictGetItemGuardAccessor(input_tensor)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['input_tensor'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.bool, device=None, requires_grad=False, size=[2, 3, 4], stride=[12, 4, 1]) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['input_tensor'], '_dynamo_dynamic_indices') == False # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor(torch)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 139743351173376) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].all, accessed_by=GetAttrGuardAccessor(all)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].all, 139743348124352) # result = torch.all(input_tensor, dim=2, out=output_tensor) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:6 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].bool, accessed_by=GetAttrGuardAccessor(bool)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | | +- EQUALS_MATCH: G['torch'].bool == torch.bool # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].empty, accessed_by=GetAttrGuardAccessor(empty)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].empty, 139743348128512) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards]
V0120 14:51:22.088000 72022 torch/_dynamo/convert_frame.py:1234] skipping: _fn (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:51:22.089000 72022 torch/_dynamo/convert_frame.py:1234] skipping: _maybe_set_eval_frame (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:51:22.089000 72022 torch/_dynamo/convert_frame.py:1234] skipping: justknobs_check (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_utils_internal.py)
```
Full logs 2.6.0:
```
V0120 14:57:46.629000 74548 torch/_dynamo/convert_frame.py:1345] skipping: _is_skip_guard_eval_unsafe_stance (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
I0120 14:57:46.631000 74548 torch/_dynamo/utils.py:1162] [0/0] ChromiumEventLogger initialized with id 9bec8ac0-9067-4f58-ba32-04edd2949f59
V0120 14:57:46.632000 74548 torch/_dynamo/convert_frame.py:930] [0/0] torchdynamo start compiling compiled_fn tests/compile/test_all.py:3, stack (elided 5 frames):
V0120 14:57:46.632000 74548 torch/_dynamo/convert_frame.py:930] [0/0] File "tests/compile/test_all.py", line 14, in <module>
V0120 14:57:46.632000 74548 torch/_dynamo/convert_frame.py:930] [0/0] output = compiled_fn(input_tensor)
V0120 14:57:46.632000 74548 torch/_dynamo/convert_frame.py:930] [0/0]
I0120 14:57:46.633000 74548 torch/_dynamo/symbolic_convert.py:2706] [0/0] Step 1: torchdynamo start tracing compiled_fn tests/compile/test_all.py:3
I0120 14:57:46.634000 74548 torch/fx/experimental/symbolic_shapes.py:3192] [0/0] create_env
V0120 14:57:46.637000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:5 in compiled_fn (compiled_fn)
V0120 14:57:46.637000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0] [__trace_source] output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:57:46.638000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:57:46.640000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR empty [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.641000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_CONST (0,) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>)]
V0120 14:57:46.642000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1)]
V0120 14:57:46.642000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR bool [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.643000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('dtype',) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), ConstantVariable(dtype: torch.bool)]
V0120 14:57:46.643000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 2 [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), ConstantVariable(dtype: torch.bool), TupleVariable(length=1)]
V0120 14:57:46.655000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR to [TensorVariable()]
V0120 14:57:46.655000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_FAST input_tensor [GetAttrVariable(TensorVariable(), to)]
V0120 14:57:46.656000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR device [GetAttrVariable(TensorVariable(), to), LazyVariableTracker()]
V0120 14:57:46.656000 74548 torch/_dynamo/variables/builder.py:2853] [0/0] wrap_to_fake L['input_tensor'] (2, 3, 4) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None, None], constraint_strides=[None, None, None], view_base_context=None, tensor_source=LocalSource(local_name='input_tensor', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0120 14:57:46.658000 74548 torch/_dynamo/output_graph.py:2156] [0/0] create_graph_input L_input_tensor_ L['input_tensor'] FakeTensor(..., size=(2, 3, 4), dtype=torch.bool) at debug_level 0 before=False
V0120 14:57:46.659000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [GetAttrVariable(TensorVariable(), to), ConstantVariable(device: device(type='cpu'))]
V0120 14:57:46.660000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE STORE_FAST output_tensor [TensorVariable()]
V0120 14:57:46.661000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:6 in compiled_fn (compiled_fn)
V0120 14:57:46.661000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0] [__trace_source] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:57:46.661000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:57:46.662000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR all [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.662000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_FAST input_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>)]
V0120 14:57:46.663000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_CONST 2 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable()]
V0120 14:57:46.663000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_FAST output_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2)]
V0120 14:57:46.664000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('dim', 'out') [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2), TensorVariable()]
V0120 14:57:46.664000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 3 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2), TensorVariable(), TupleVariable(length=2)]
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] Graph break in user code at tests/compile/test_all.py:6
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] Reason: Unsupported: out variants with resizing on graph inputs
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] User code traceback:
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] File "tests/compile/test_all.py", line 6, in compiled_fn
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks]
I0120 14:57:46.668000 74548 torch/_dynamo/convert_frame.py:755] [0/0] Restarting analysis due to _dynamo/symbolic_convert.py:161 in fail_and_restart_analysis
I0120 14:57:46.669000 74548 torch/_dynamo/symbolic_convert.py:2706] [0/0_1] Step 1: torchdynamo start tracing compiled_fn tests/compile/test_all.py:3
I0120 14:57:46.670000 74548 torch/fx/experimental/symbolic_shapes.py:3192] [0/0_1] create_env
V0120 14:57:46.671000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0_1] [__trace_source] TRACE starts_line tests/compile/test_all.py:5 in compiled_fn (compiled_fn)
V0120 14:57:46.671000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0_1] [__trace_source] output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:57:46.671000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:57:46.672000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR empty [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.672000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_CONST (0,) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>)]
V0120 14:57:46.673000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_GLOBAL torch [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1)]
V0120 14:57:46.673000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR bool [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.674000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_CONST ('dtype',) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), ConstantVariable(dtype: torch.bool)]
V0120 14:57:46.674000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE CALL_FUNCTION_KW 2 [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), ConstantVariable(dtype: torch.bool), TupleVariable(length=1)]
V0120 14:57:46.675000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR to [TensorVariable()]
V0120 14:57:46.676000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_FAST input_tensor [GetAttrVariable(TensorVariable(), to)]
V0120 14:57:46.676000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR device [GetAttrVariable(TensorVariable(), to), LazyVariableTracker()]
V0120 14:57:46.677000 74548 torch/_dynamo/variables/builder.py:2853] [0/0_1] wrap_to_fake L['input_tensor'] (2, 3, 4) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None, None], constraint_strides=[None, None, None], view_base_context=None, tensor_source=LocalSource(local_name='input_tensor', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0120 14:57:46.678000 74548 torch/_dynamo/output_graph.py:2156] [0/0_1] create_graph_input L_input_tensor_ L['input_tensor'] FakeTensor(..., size=(2, 3, 4), dtype=torch.bool) at debug_level 0 before=False
V0120 14:57:46.679000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE CALL_FUNCTION 1 [GetAttrVariable(TensorVariable(), to), ConstantVariable(device: device(type='cpu'))]
V0120 14:57:46.680000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE STORE_FAST output_tensor [TensorVariable()]
V0120 14:57:46.681000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0_1] [__trace_source] TRACE starts_line tests/compile/test_all.py:6 in compiled_fn (compiled_fn)
V0120 14:57:46.681000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0_1] [__trace_source] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:57:46.681000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:57:46.681000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR all [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.682000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_FAST input_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>)]
V0120 14:57:46.682000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_CONST 2 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable()]
V0120 14:57:46.683000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_FAST output_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2)]
V0120 14:57:46.683000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_CONST ('dim', 'out') [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2), TensorVariable()]
V0120 14:57:46.684000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE CALL_FUNCTION_KW 3 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2), TensorVariable(), TupleVariable(length=2)]
V0120 14:57:46.684000 74548 torch/_dynamo/output_graph.py:972] [0/0_1] COMPILING GRAPH due to GraphCompileReason(reason='out variants with resizing on graph inputs', user_stack=[<FrameSummary file tests/compile/test_all.py, line 6 in compiled_fn>], graph_break=True)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1615] [0/0_1] REMOVE UNUSED GRAPHARG L['input_tensor']
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] TRACED GRAPH
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] ===== __compiled_fn_2 =====
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] /home/user1/venv1/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] def forward(self):
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] # File: tests/compile/test_all.py:5 in compiled_fn, code: output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] empty: "b8[0][1]cpu" = torch.empty((0,), dtype = torch.bool)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] output_tensor: "b8[0][1]cpu" = empty.to(device(type='cpu')); empty = None
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] return (output_tensor,)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code]
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code]
I0120 14:57:46.691000 74548 torch/_dynamo/output_graph.py:1458] [0/0_1] Step 2: calling compiler function inductor
W0120 14:57:48.602000 74548 torch/_inductor/debug.py:435] [0/0_1] model__0_inference_0 debug trace: /home/user1/qnpu/env_name/src/torch_compile_debug/run_2025_01_20_14_57_46_633319-pid_74548/torchinductor/model__0_inference_0.0
I0120 14:57:48.606000 74548 torch/_dynamo/output_graph.py:1463] [0/0_1] Step 2: done compiler function inductor
I0120 14:57:48.611000 74548 torch/fx/experimental/symbolic_shapes.py:4547] [0/0_1] produce_guards
V0120 14:57:48.612000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].size()[0] 2 None
V0120 14:57:48.612000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].size()[1] 3 None
V0120 14:57:48.612000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].size()[2] 4 None
V0120 14:57:48.613000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].stride()[0] 12 None
V0120 14:57:48.613000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].stride()[1] 4 None
V0120 14:57:48.613000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].stride()[2] 1 None
V0120 14:57:48.614000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].storage_offset() 0 None
V0120 14:57:48.614000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].size()[0] == 2
V0120 14:57:48.615000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].size()[1] == 3
V0120 14:57:48.615000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].size()[2] == 4
V0120 14:57:48.616000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].stride()[0] == 12
V0120 14:57:48.616000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].stride()[1] == 4
V0120 14:57:48.616000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].stride()[2] == 1
V0120 14:57:48.617000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].storage_offset() == 0
V0120 14:57:48.617000 74548 torch/_dynamo/guards.py:2364] [0/0_1] [__guards] GUARDS:
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards]
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] TREE_GUARD_MANAGER:
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] +- RootGuardManager
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:493 in init_ambient_guards
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- GuardManager: source=L['input_tensor'], accessed_by=DictGetItemGuardAccessor('input_tensor')
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | +- TENSOR_MATCH: check_tensor(L['input_tensor'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.bool, device=None, requires_grad=False, size=[2, 3, 4], stride=[12, 4, 1]) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | +- NO_HASATTR: hasattr(L['input_tensor'], '_dynamo_dynamic_indices') == False # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor('torch')
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 139725124415584) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | +- GuardManager: source=G['torch'].all, accessed_by=GetAttrGuardAccessor(all)
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].all, 139725121374464) # result = torch.all(input_tensor, dim=2, out=output_tensor) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:6 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | +- GuardManager: source=G['torch'].bool, accessed_by=GetAttrGuardAccessor(bool)
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | | +- EQUALS_MATCH: G['torch'].bool == torch.bool # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | +- GuardManager: source=G['torch'].empty, accessed_by=GetAttrGuardAccessor(empty)
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].empty, 139725121378624) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards]
V0120 14:57:49.619000 74548 torch/_dynamo/guards.py:2346] [0/0_1] [__guards] Guard eval latency = 0.76 us
I0120 14:57:49.620000 74548 torch/_dynamo/pgo.py:636] [0/0_1] put_code_state: no cache key, skipping
V0120 14:57:49.626000 74548 torch/_dynamo/convert_frame.py:1345] skipping: _fn (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:57:49.627000 74548 torch/_dynamo/convert_frame.py:1345] skipping: _callback_from_stance (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:57:49.627000 74548 torch/_dynamo/convert_frame.py:1345] skipping: _maybe_set_eval_frame (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:57:49.628000 74548 torch/_dynamo/convert_frame.py:1345] skipping: justknobs_check (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_utils_internal.py)
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] torchdynamo start compiling torch_dynamo_resume_in_compiled_fn_at_6 tests/compile/test_all.py:6, stack (elided 5 frames):
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] File "tests/compile/test_all.py", line 14, in <module>
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] output = compiled_fn(input_tensor)
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] File "/home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] return fn(*args, **kwargs)
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0]
I0120 14:57:49.630000 74548 torch/_dynamo/symbolic_convert.py:2706] [1/0] Step 1: torchdynamo start tracing torch_dynamo_resume_in_compiled_fn_at_6 tests/compile/test_all.py:6
I0120 14:57:49.631000 74548 torch/fx/experimental/symbolic_shapes.py:3192] [1/0] create_env
V0120 14:57:49.632000 74548 torch/_dynamo/symbolic_convert.py:932] [1/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:6 in torch_dynamo_resume_in_compiled_fn_at_6 (compiled_fn)
V0120 14:57:49.632000 74548 torch/_dynamo/symbolic_convert.py:932] [1/0] [__trace_source] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:57:49.632000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE LOAD_FAST ___stack0 []
V0120 14:57:49.633000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE JUMP_ABSOLUTE 42 [LazyVariableTracker()]
V0120 14:57:49.633000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE STORE_FAST result [LazyVariableTracker()]
V0120 14:57:49.634000 74548 torch/_dynamo/variables/builder.py:2853] [1/0] wrap_to_fake L['___stack0'] (2, 3) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None], constraint_strides=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='___stack0', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0120 14:57:49.635000 74548 torch/_dynamo/output_graph.py:2156] [1/0] create_graph_input L_stack0_ L['___stack0'] FakeTensor(..., size=(2, 3), dtype=torch.bool) at debug_level 0 before=False
V0120 14:57:49.637000 74548 torch/_dynamo/symbolic_convert.py:932] [1/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:7 in torch_dynamo_resume_in_compiled_fn_at_6 (compiled_fn)
V0120 14:57:49.637000 74548 torch/_dynamo/symbolic_convert.py:932] [1/0] [__trace_source] return result
V0120 14:57:49.637000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE LOAD_FAST result []
V0120 14:57:49.637000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0120 14:57:49.638000 74548 torch/_dynamo/convert_frame.py:768] [1/0] Skipping frame because no content in function call torch_dynamo_resume_in_compiled_fn_at_6 tests/compile/test_all.py 6
I0120 14:57:49.638000 74548 torch/_dynamo/pgo.py:636] [1/0] put_code_state: no cache key, skipping
I0120 14:57:49.644000 74548 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0120 14:57:49.644000 74548 torch/_dynamo/eval_frame.py:398] * compiled_fn tests/compile/test_all.py:3
I0120 14:57:49.644000 74548 torch/_dynamo/eval_frame.py:398] ]
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gitc15b011
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0a0+gitc15b011
[pip3] torch_tb_profiler==0.4.0
[pip3] triton==3.1.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
| true
|
2,799,317,705
|
`torch.compile` may produce wrong result with `Linear+MaxPool2d+BatchNorm2d`.
|
Zoeeeeey
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 4
|
NONE
|
### 🐛 Describe the bug
Hi! I found that the following model gives different results after compilation.
```python
import torch
def fn():
    v4_0 = torch.nn.Parameter(torch.randn([8, 1, 4, 1], dtype=torch.float32), requires_grad=True)
    v5_0 = torch.nn.Parameter(torch.empty([1, 1, 4, 1], dtype=torch.float32), requires_grad=True)
    v6_0 = torch.cat((v4_0, v5_0), dim=0)
    v6_0_flat = v6_0.view(-1, 1)  # flatten and reshape
    linear_layer = torch.nn.Linear(in_features=1, out_features=43, bias=True)
    v2_0 = linear_layer(v6_0_flat)
    v2_0_unsqueezed = v2_0.unsqueeze(0).unsqueeze(0)  # add batch and channel dims to satisfy MaxPool2d's input requirements
    maxpool_layer = torch.nn.MaxPool2d(kernel_size=(2, 42), stride=2, padding=0, dilation=1, ceil_mode=False)
    v1_0 = maxpool_layer(v2_0_unsqueezed)
    batchnorm_layer = torch.nn.BatchNorm2d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    v0_0 = batchnorm_layer(v2_0_unsqueezed)
    return v1_0, v0_0
ret_eager = fn()
compiled = torch.compile(fn)
ret_compiled = compiled()
# assert torch.allclose(ret_eager[0], ret_compiled[0]), '\n'.join(map(str, ["", ret_eager[0], ret_compiled[0]]))
# assert torch.allclose(ret_eager[1], ret_compiled[1]), '\n'.join(map(str, ["", ret_eager[1], ret_compiled[1]]))
torch.testing.assert_close(ret_eager[0], ret_compiled[0])
# OUTPUT:
# AssertionError: Tensor-likes are not close!
#
# Mismatched elements: 18 / 18 (100.0%)
# Greatest absolute difference: nan at index (0, 0, 16, 0) (up to 1e-05 allowed)
# Greatest relative difference: nan at index (0, 0, 16, 0) (up to 1.3e-06 allowed)
torch.testing.assert_close(ret_eager[1], ret_compiled[1])
# OUTPUT:
# AssertionError: Tensor-likes are not close!
#
# Mismatched elements: 1548 / 1548 (100.0%)
# Greatest absolute difference: nan at index (0, 0, 0, 0) (up to 1e-05 allowed)
# Greatest relative difference: nan at index (0, 0, 0, 0) (up to 1.3e-06 allowed)
```
### Error logs
``` Python
# AssertionError: Tensor-likes are not close!
#
# Mismatched elements: 18 / 18 (100.0%)
# Greatest absolute difference: nan at index (0, 0, 16, 0) (up to 1e-05 allowed)
# Greatest relative difference: nan at index (0, 0, 16, 0) (up to 1.3e-06 allowed)
#...
# AssertionError: Tensor-likes are not close!
#
# Mismatched elements: 1548 / 1548 (100.0%)
# Greatest absolute difference: nan at index (0, 0, 0, 0) (up to 1e-05 allowed)
# Greatest relative difference: nan at index (0, 0, 0, 0) (up to 1.3e-06 allowed)
```
### Versions
```bash
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.7.0.dev20250116+cpu
[pip3] torchaudio==2.6.0.dev20250116+cpu
[pip3] torchvision==0.22.0.dev20250116+cpu
[conda] numpy 2.2.1 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cpu pypi_0 pypi
```
cc @chauhang @penguinwu
| true
|
2,799,241,509
|
getting different results when adding `torch.Tensor` or python number to a DTensor - Is that expected?
|
thevasudevgupta
|
open
|
[
"oncall: distributed",
"module: dtensor"
] | 3
|
NONE
|
### 🐛 Describe the bug
```python
# torchrun --nproc-per-node 2 scripts/dtensor.py
import os
import torch
from torch.distributed.tensor import init_device_mesh, Shard, distribute_tensor
use_tensor = False
rank = int(os.getenv("RANK"))
world_size = int(os.getenv("WORLD_SIZE"))
torch.manual_seed(0)
tensor1 = torch.rand(1000, 88)
mesh = init_device_mesh("cpu", (world_size,))
norm1 = torch.linalg.vector_norm(tensor1)
norm1 += torch.tensor(2) if use_tensor else 2
print(f"{norm1}\n")
tensor2 = distribute_tensor(tensor1, mesh, [Shard(dim=0)])
norm2 = torch.linalg.vector_norm(tensor2)
norm2 += torch.tensor(2) if use_tensor else 2
print(f"{norm2.full_tensor()}\n")
```
Setting `use_tensor = False` gives different results - is that expected?
`use_tensor = True` works fine and gives the same results.
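A minimal extra check with the same setup as above (a hedged sketch, not part of the original script): do the add out-of-place with a Python number, to see whether only the in-place `+=` path misbehaves.
```python
# Same mesh/tensors as the script above; out-of-place add with a Python scalar.
norm3 = torch.linalg.vector_norm(tensor2) + 2
print(f"{norm3.full_tensor()}\n")
```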
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.30.2
Libc version: N/A
Python version: 3.10.8 (main, Nov 24 2022, 08:08:27) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.25.2
[pip3] pytorch-lightning==2.0.1.post0
[pip3] torch==2.5.1
[pip3] torchaudio==2.0.0.dev20230302
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.15.2
[pip3] torchvision==0.19.0
[conda] numpy 1.25.2 pypi_0 pypi
[conda] pytorch-lightning 2.0.1.post0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230302 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,799,189,123
|
DISABLED test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_False (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35863944744).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 146, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 7 but got 14.
Absolute difference: 7
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,799,054,463
|
[ARM] - test_quantized_module.py test_lstm_api fails on Aarch64
|
robert-hardwick
|
closed
|
[
"oncall: quantization",
"module: arm"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
We are seeing test_lstm_api in test_quantized_module.py fail on Aarch64. It is currently not enabled in CI - we would like to enable this.
This happens due to a change of input dimensions here - https://github.com/pytorch/pytorch/blob/92b9da1fc2b0a834f54f4d97fd4a2402f47bce07/test/quantization/core/test_quantized_module.py#L1758
which causes a cache miss, so the implementation falls back to default_lowp_kind.
```
FAIL: test_lstm_api (__main__.TestDynamicQuantizedModule)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2979, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_quantized.py", line 171, in test_fn
for qengine in supported_qengines:
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/hypothesis/core.py", line 1145, in wrapped_test
raise the_error_hypothesis_found
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_quantized.py", line 174, in test_fn
qfunction(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/quantization/core/test_quantized_module.py", line 1760, in test_lstm_api
self.check_eager_serialization(cell_dq, ref_dq, [x])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_quantization.py", line 674, in check_eager_serialization
check_outputs(ref_out, load_out)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_quantization.py", line 667, in check_outputs
self.assertEqual(ref_out[0], load_out[0])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3885, in assertEqual
raise error_metas.pop()[0].to_error(
AssertionError: Tensor-likes are not close!
Mismatched elements: 1400 / 1400 (100.0%)
Greatest absolute difference: 1.1401878595352173 at index (8, 18, 6) (up to 1e-05 allowed)
Greatest relative difference: 5944.72802734375 at index (4, 4, 6) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
python test/quantization/core/test_quantized_module.py TestDynamicQuantizedModule.test_lstm_api
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 46 tests in 45.840s
FAILED (failures=1, skipped=4)
```
Fixed in https://github.com/pytorch/pytorch/pull/135058
### Versions
jenkins@73bf36410487:~/workspace$ python3 collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.5.1
[conda] No relevant packages
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @malfet @snadampal @milpuz01
| true
|
2,798,892,131
|
solve apl dependency issue
|
alinpahontu2912
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
According to the [APL documentation](https://developer.arm.com/documentation/101004/2404/General-information/Arm-Performance-Libraries-example-programs), libraries ending with _mp are OpenMP multi-threaded libraries.
When a project is compiled with MSVC and the -openmp flag, the vcomp library (Visual C++ implementation of OpenMP) is used for runtime calls.
However, the current APL implementation uses the libomp.dll (LLVM) variant.
As a result, there are unexpected behaviors at runtime.
---
For Example:
```python
import torch
# Create a sparse tensor
# Input (Sparse Tensor):
# [[0, 1],
# [1, 0]]
indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([1, 1], dtype=torch.float32)
size = torch.Size([2, 2])
sparse_tensor = torch.sparse_coo_tensor(indices, values, size)
# Convert sparse tensor to dense tensor
dense_tensor = sparse_tensor.to_dense()
# Expected Output (Dense Tensor):
# [[0, 1],
# [1, 0]]
print("\nDense Tensor:")
print(dense_tensor)
```
However, it prints unexpected outputs such as:
```python
# [[0, 11],
# [10, 0]]
```
The issue arises because the following code does not function as expected at runtime:
https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/ParallelOpenMP.h#L30
```c++
// returns 1, however since OpenMP is enabled it should return the total number of threads
int64_t num_threads = omp_get_num_threads();
```
---
At runtime, loading multiple OpenMP libraries (in this case `libomp` and `vcomp`) causes unexpected behaviour.
So we've switched from the `_mp` libraries to the non-`_mp` versions, and we use `vcomp` for OpenMP calls.
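A quick re-check of the repro above (a hedged sketch, not part of this PR's test changes): with the non-`_mp` libraries in place, the dense conversion should match the sparse layout exactly.
```python
import torch

# Re-run the sparse-to-dense repro and assert the expected result.
indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([1.0, 1.0])
sparse = torch.sparse_coo_tensor(indices, values, (2, 2))
expected = torch.tensor([[0.0, 1.0], [1.0, 0.0]])
assert torch.equal(sparse.to_dense(), expected), sparse.to_dense()
```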
| true
|
2,798,884,722
|
Nested tensor support for pointwise matrix multiplication of nested tensor and normal tensor
|
kkj15dk
|
open
|
[
"triaged",
"module: nestedtensor"
] | 9
|
NONE
|
### 🚀 The feature, motivation and pitch
I am using nested tensors (jagged layout) for my input data, and I need to apply rotary positional embeddings to qkv vectors.
At the moment I cannot see how to do this efficiently. I've landed on the slow list comprehension below, where I slice the normal tensor and multiply it with the elements of the nested tensor.
```
import torch

def rotate_half(x):
# x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2 :] # old implementation
x1, x2 = x.chunk(2, dim= -1)
return torch.cat(
(-x2, x1), dim=-1
)
# @torch.jit.script # TODO: I don't think this is supported for torchscript with nested tensors
# def _apply_rotary_pos_emb_torchscript(qkv, cos, sin):
def _apply_rotary_pos_emb(qkv, cos, sin): # qkv shape: (B, j1, 3, n_heads, head_dim), cos & sin shape: (1, j1.max(), 1, head_dim)
if qkv.is_nested:
cos = cos.squeeze(0)
sin = sin.squeeze(0)
# slow list comprehension
result_list = [(t * cos[:t.shape[0]]) + (rotate_half(t) * sin[:t.shape[0]]) for t in qkv.unbind()]
# Reassemble the list of tensors back into a nested tensor
return torch.nested.as_nested_tensor(result_list)
return (qkv * cos) + (rotate_half(qkv) * sin)
```
### Alternatives
You could convert the cos and sin tensors to nested tensors of the same shape as qkv, and multiply these, but this does also not seem like an optimal solution, and requires copying the cos and sin vectors as much as we have batch size.
There might be some way of applying rotary positional embeddings to nested tensors that I haven't thought of. If so, please let me know!
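For reference, a minimal sketch of that alternative (hedged; it assumes the squeezed `cos`/`sin` tables from the snippet above and a jagged-layout `qkv`, and `expand_rope_to_nested` is just an illustrative name):
```python
import torch

def expand_rope_to_nested(table, qkv):
    # Slice the dense cos/sin table per sequence length and rebuild it as a
    # jagged nested tensor with the same ragged structure as qkv.
    lengths = [t.shape[0] for t in qkv.unbind()]
    return torch.nested.nested_tensor([table[:L] for L in lengths], layout=torch.jagged)

# cos_nt = expand_rope_to_nested(cos, qkv)  # copies cos once per batch element
# sin_nt = expand_rope_to_nested(sin, qkv)
# out = qkv * cos_nt + rotate_half(qkv) * sin_nt
```
This still materializes one copy of the table per batch element, which is exactly the overhead mentioned above.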
### Additional context
I am working on a project utilizing protein sequences as input data. The data varies widely in sequence length: the min sequence length is probably 32 tokens, and the max is whatever I set the max length to be, probably 4096 tokens. I am using layout=torch.jagged at the moment, as this seems to be the best format for this.
It's the perfect project for nested tensors, but so far FlashAttention, rotary positional embeddings, and loss calculations are proving difficult to implement efficiently.
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,798,740,877
|
Significant precision error from torch.compile
|
Edenzzzz
|
open
|
[
"needs reproduction",
"triaged",
"module: correctness (silent)",
"bug",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
### 🐛 Describe the bug
When wrapping torch.compile around a forward region of a model (both `reduce-overhead` and `max-autotune-no-cudagraphs`), the speed-up is accompanied by significant precision error. This happens even when wrapping around the smallest op as shown below. After enabling `CUDA_LAUNCH_BLOCKING=1`, the precision error is gone.
It would be troublesome to provide a minimal reproducer, as this is an ongoing project involving large model block dependencies, but I can also try that if needed.


### Profile trace of pure cuda graph showing perf. benefits but also incurring error
Even though `reduce-overhead` is used, Triton kernel fusion (the purple region) still cuts in, which might be causing the error.

### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250109+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.127.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.5.1
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] optree==0.13.1
[pip3] pynvjitlink==0.2.3
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250109+cu124
[pip3] torch-tensorrt==2.5.0a0
[pip3] torchaudio==2.6.0.dev20250109+cu124
[pip3] torchvision==0.22.0.dev20250109+cu124
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @mcarilli @eellison @BoyuanFeng
| true
|
2,798,732,377
|
DISABLED test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35856926954).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 266, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,798,732,275
|
DISABLED test_aoti (__main__.TestMemoryPlanning)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti&suite=TestMemoryPlanning&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35856927508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_memory_planning.py", line 113, in test_aoti
).run(
RuntimeError: Expected to find "int64_t int_array_2[] = {24L + align(12L*s0), };" but did not find it
Searched string:
Auto-tuning code written to /tmp/tmp92c6h0z4/tmp0ptwdcmx.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
Output code:
From CHECK: int64_t int_array_2[] = {24L + align(12L*s0), };
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_memory_planning.py TestMemoryPlanning.test_aoti
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_memory_planning.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,798,732,162
|
DISABLED test_reorder_peak_memory_lpmf (__main__.TestOperatorReorderForPeakMemory)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reorder_peak_memory_lpmf&suite=TestOperatorReorderForPeakMemory&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35856927699).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reorder_peak_memory_lpmf`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_memory.py", line 114, in test_reorder_peak_memory_lpmf
.run(code)
RuntimeError: Expected to find "buf0 = " but did not find it
Searched string:
extern_kernels.mm(primals_2, primals_3, out=buf2)
del primals_3
buf1 = empty_strided_cuda((2048, 12), (12, 1), torch.float32)
# Topologically Sorted Source Nodes: [t1], Original ATen: [aten.mm]
extern_kernels.mm(primals_2, buf0, out=buf1)
del buf0
buf3 = empty_strided_cuda((2048, 1), (1, 1), torch.float32)
# Topologically Sorted Source Nodes: [t3], Original ATen: [aten.mm]
extern_kernels.mm(reinterpret_tensor(buf1, (2048, 10), (12, 1), 0), primals_4, out=buf3)
buf6 = empty_strided_cuda((), (), torch.float32)
# Topologically Sorted Source Nodes: [sum_1], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_1.run(buf3, buf6, 1, 2048, grid=grid(1), stream=stream0)
del buf3
buf5 = empty_strided_cuda((2048, 12), (12, 1), torch.float32)
# Topologically Sorted Source Nodes: [t4], Original ATen: [aten.mm]
extern_kernels.mm(buf2, buf4, out=buf5)
del buf4
buf7 = empty_strided_cuda((3, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [sum_2], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_2.run(buf5, buf7, 3, 6827, grid=grid(3), stream=stream0)
del buf5
buf9 = buf6; del buf6 # reuse
# Topologically Sorted Source Nodes: [sum_2, add], Original ATen: [aten.sum, aten.add]
stream0 = get_raw_stream(0)
triton_per_fused_add_sum_3.run(buf9, buf7, 1, 3, grid=grid(1), stream=stream0)
del buf7
return (buf9, primals_2, reinterpret_tensor(buf2, (1, 2048), (1, 1), 0), reinterpret_tensor(primals_5, (10, 1), (1, 10), 0), reinterpret_tensor(buf1, (10, 2048), (1, 12), 0), reinterpret_tensor(primals_4, (1, 10), (1, 1), 0), )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
primals_1 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
primals_2 = rand_strided((2048, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_3 = rand_strided((1, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_4 = rand_strided((10, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_5 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
fn = lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: buf0 =
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_memory.py TestOperatorReorderForPeakMemory.test_reorder_peak_memory_lpmf
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_memory.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,798,675,566
|
Fix incorrect citation of authors in documentation
|
kyo-takano
|
closed
|
[
"open source",
"Merged",
"Stale",
"ciflow/trunk",
"release notes: optim"
] | 12
|
CONTRIBUTOR
|
This PR corrects the citation of Adafactor authors "Noam Shazeer" and "Mitchell Stern" in the documentation.
The current text incorrectly lists them as "Shazeer, Noam, and Mitchell Stern," which seems to be a result of a data parsing issue of some reference manager(s) [as you can find many papers with the same issue](https://www.google.com/search?q=%22Shazeer%2C+Noam%2C+and+Mitchell+Stern%22).
The updated citation follows standard conventions for author names.
| true
|
2,798,636,562
|
Some FlexAttention learned bias bugs/limitations
|
Chillee
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
## Ex 1
```Python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask, create_mask
torch.set_default_device('cuda')
flex_attention = torch.compile(flex_attention, dynamic=False)
result = torch.randn((), requires_grad=True)
def score_mod(score, b, h, q, kv):
return score * result
S = 8192
torch.manual_seed(0)
q, k, v = [torch.randn(1, 1, S, 64, dtype=torch.float16, requires_grad=True) for _ in range(3)]
flex_attention(q, k, v, score_mod=score_mod).sum().backward()
```
```Shell
File "/home/chilli/local/pytorch/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/graph.py", line 1147, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/chilli/local/pytorch/torch/_inductor/graph.py", line 1137, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/lowering.py", line 452, in wrapped
out = decomp_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/kernel/flex_attention.py", line 2226, in flex_attention_backward
joint_outputs = process_joint_outputs(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/kernel/flex_attention.py", line 2103, in process_joint_outputs
grads_out = [get_out(x) for x in other_grads]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/kernel/flex_attention.py", line 2103, in <listcomp>
grads_out = [get_out(x) for x in other_grads]
^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/kernel/flex_attention.py", line 2100, in get_out
assert buf.name is not None
^^^^^^^^^^^^^^^^^^^^
torch._inductor.exc.LoweringException: AssertionError:
target: flex_attention_backward
args[0]: TensorBox(StorageBox(
InputBuffer(name='primals_1', layout=FixedLayout('cuda:0', torch.float16, size=[1, 1, 8192, 64], stride=[524288, 524288, 64, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='primals_2', layout=FixedLayout('cuda:0', torch.float16, size=[1, 1, 8192, 64], stride=[524288, 524288, 64, 1]))
))
args[2]: TensorBox(StorageBox(
```
## Ex 2
```Python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask, create_mask
torch.set_default_device('cuda')
flex_attention = torch.compile(flex_attention, dynamic=False)
result = torch.randn((1,), requires_grad=True)
def score_mod(score, b, h, q, kv):
return score * result[score.new_zeros((), dtype=torch.int)]
S = 8192
torch.manual_seed(0)
q, k, v = [torch.randn(1, 1, S, 64, dtype=torch.float16, requires_grad=True) for _ in range(3)]
flex_attention(q, k, v, score_mod=score_mod).sum().backward()
```
```Shell
Traceback (most recent call last):
File "/home/chilli/.conda/envs/py311/lib/python3.11/site-packages/triton/language/core.py", line 35, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/chilli/.conda/envs/py311/lib/python3.11/site-packages/triton/language/core.py", line 1268, in broadcast_to
return semantic.broadcast_impl_shape(input, shape, _builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/.conda/envs/py311/lib/python3.11/site-packages/triton/language/semantic.py", line 732, in broadcast_impl_shape
raise ValueError(f"Cannot broadcast, rank mismatch: {src_shape}, {shape}")
ValueError: Cannot broadcast, rank mismatch: [1], [64, 64]
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 96:33:
if CHECK_BLOCK_BOUNDARY:
grad_scores = tl.where(offs_n2[None, :] < KV_LEN, grad_scores, 0.0)
# ~~~~~~~~~~~~~~~~~~~ Apply other buffer grad writes ~~~~~~~~~~~~~
if WRITE_DQ:
scatter_mask = offs_m2[:, None] < Q_LEN and offs_n2[None, :] < KV_LEN
tmp12 = tl.full([1], 0, tl.int32)
tmp13 = (ds)
tmp14 = (pre_mod_scores)
tmp15 = tmp13 * tmp14
tmp16 = tmp15.to(tl.float32)
tl.atomic_add(in_ptr17 + tl.broadcast_to(tmp12, tmp16.shape), tmp16, scatter_mask, sem='relaxed')
^
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 79:17:
dq = bwd_dq_block_mn(
arg_Q, arg_K, arg_V, arg_LSE, arg_DELTA, arg_DO, arg_DQ, arg_DV, arg_KV_NUM_BLKS, arg_KV_IDX, arg_Q_NUM_BLKS, arg_Q_IDX, arg_FULL_KV_NUM_BLKS, arg_FULL_KV_IDX, arg_FULL_Q_NUM_BLKS, arg_FULL_Q_IDX, in_ptr16, in_ptr17, out_ptr0,
dq, q, kT_ptrs, vT_ptrs, do, Di, lse, Q_LEN, KV_LEN,
off_z, off_hq, offs_m2, offs_n2,
stride_kn, stride_kd, stride_vn, stride_vd,
kv_indices, sparse_kv_num_blocks,
MATMUL_PRECISION, RCP_LN2,
IS_FULL_BLOCKS, CHECK_BLOCK_BOUNDARY=True,
)
```
### Versions
N/A
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @drisspg @yanboliang @BoyuanFeng
| true
|
2,798,540,327
|
When using `torch.jit.trace` with `Linear+MaxPool2d+BatchNorm2d`, different results are observed.
|
Zoeeeeey
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
Hi! I found that the following model gives different results after using `torch.jit.trace`.
Are there any bugs in this process?
```python
import numpy as np
import torch
import torch.nn as nn
class SymbolNet(nn.Module):
def __init__(self):
super(SymbolNet, self).__init__()
self.m3 = nn.Linear(in_features=1, out_features=43, bias=True)
self.m4 = nn.MaxPool2d(kernel_size=(2, 42), stride=2, padding=0, dilation=1, ceil_mode=False)
self.m5 = nn.BatchNorm2d(1, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
def forward(self, x):
x = self.m3(x)
x = self.m4(x)
x = self.m5(x)
return x
model = SymbolNet()
inp = np.random.rand(24, 1, 4, 1).astype('float32')
m_out = model(torch.from_numpy(inp).to('cpu'))
m_out = [v.cpu().detach() for v in m_out] # torch2numpy
m_out = [v.resolve_conj().numpy() if v.is_conj() else v.numpy() for v in m_out]
# Compile the model
opt = torch.jit.trace(model.eval(), torch.from_numpy(inp).to('cpu'))
# Compiled run
opt_out = opt(torch.from_numpy(inp).to('cpu'))
opt_out = [v.cpu().detach() for v in opt_out]
opt_out = [v.resolve_conj().numpy() if v.is_conj() else v.numpy() for v in opt_out]
# Differential testing
for i, (l, r) in enumerate(zip(m_out, opt_out)):
np.testing.assert_allclose(l, r, rtol=1e-2, atol=1e-3, err_msg=f"Result mismatch @ index {i}")
```
Output:
```python
# AssertionError:
# Not equal to tolerance rtol=0.01, atol=0.001
# Result mismatch @ index 0
# Mismatched elements: 2 / 2 (100%)
# Max absolute difference among violations: 2.5436974
# Max relative difference among violations: 2.408335
# ACTUAL: array([[[-0.560976],
# [-1.487492]]], dtype=float32)
# DESIRED: array([[[1.169456],
# [1.056206]]], dtype=float32)
```
### Versions
```bash
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.7.0.dev20250116+cpu
[pip3] torchaudio==2.6.0.dev20250116+cpu
[pip3] torchvision==0.22.0.dev20250116+cpu
[conda] numpy 2.2.1 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cpu pypi_0 pypi
```
| true
|
2,798,443,871
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 6
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,798,351,514
|
CI test: TestAutograd.test_gradcheck_nondeterministic
|
yanboliang
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145205
```
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_autograd.py TestAutograd.test_gradcheck_nondeterministic
```
| true
|
2,798,346,582
|
[CI][CUDA][Dynamic Shape] xfail: DynamicShapesCodegenGPUTests.test_linspace4_dynamic_shapes_cuda
|
nWEIdia
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
`python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_linspace4_dynamic_shapes_cuda`
fails to generate Triton kernels, causing assert failures on 2x H100 systems (and 2x Grace H100 systems).
Failures look like the one below:
F
inline_call [] stats [('calls_captured', 1), ('unique_graphs', 1)]
inductor [('fxgraph_cache_miss', 1)]
aot_autograd [('total', 1), ('autograd_cache_miss', 1), ('autograd_cache_saved', 1), ('ok', 1)]
FAIL: test_linspace4_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests.test_linspace4_dynamic_shapes_cuda)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/torch/testing/_internal/common_utils.py", line 3114, in wrapper
method(*args, **kwargs)
File "/opt/pytorch/pytorch/test/inductor/test_torchinductor.py", line 12212, in new_test
return value(self)
^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/testing.py", line 420, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/pytorch/pytorch/test/inductor/test_torchinductor.py", line 2603, in test_linspace4
self.common(fn, (torch.Tensor([]),))
File "/opt/pytorch/pytorch/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 424, in common
return check_codegen(
^^^^^^^^^^^^^^
File "/opt/pytorch/pytorch/test/inductor/test_torchinductor_codegen_dynamic_shapes.py", line 82, in check_codegen
self.assertTrue("def triton" in code, f"Failed to find triton kernel\n{code}")
AssertionError: False is not true : Failed to find triton kernel
# AOT ID: ['0_inference']
from ctypes import c_void_p, c_long, c_int
import torch
import math
import random
import os
import tempfile
from math import inf, nan
from torch._inductor.hooks import run_intermediate_hooks
from torch._inductor.utils import maybe_profile
from torch._inductor.codegen.memory_planning import _align as align
from torch import device, empty_strided
from torch._inductor.async_compile import AsyncCompile
from torch._inductor.select_algorithm import extern_kernels
from torch._inductor.codegen.multi_kernel import MultiKernelCall
aten = torch.ops.aten
inductor_ops = torch.ops.inductor
_quantized = torch.ops._quantized
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
empty_strided_cpu = torch._C._dynamo.guards._empty_strided_cpu
empty_strided_cuda = torch._C._dynamo.guards._empty_strided_cuda
empty_strided_xpu = torch._C._dynamo.guards._empty_strided_xpu
reinterpret_tensor = torch._C._dynamo.guards._reinterpret_tensor
alloc_from_pool = torch.ops.inductor._alloc_from_pool
async_compile = AsyncCompile()
empty_strided_p2p = torch._C._distributed_c10d._SymmetricMemory.empty_strided_p2p
async_compile.wait(globals())
del async_compile
def call(args):
with torch.cuda._DeviceGuard(1):
torch.cuda.set_device(1)
buf0 = empty_strided_cuda((0, ), (1, ), torch.float32)
return (buf0, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
fn = lambda: call([])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_linspace4_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @atalman @malfet @ptrblck @eqy @tinglvv
| true
|
2,798,291,858
|
Indexed ^= (XOR in-place) operation doesn't work as expected on MPS backend
|
TrevorPeyton
|
closed
|
[
"high priority",
"triaged",
"module: regression",
"module: correctness (silent)",
"module: mps"
] | 1
|
NONE
|
### 🐛 Describe the bug
The ^= (XOR in-place) operation produces incorrect results on the MPS backend. The behavior is inconsistent with other backends, such as CPU. Specifically, the operation appears to modify unintended values in the tensor.
```
import torch
# On CPU
zeros = torch.zeros((10, 2), dtype=torch.int16, device="cpu")
zeros[:, 0] ^= 1
print(zeros) # Expected and correct output:
# tensor([[1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0]], dtype=torch.int16)
# On MPS
zeros = torch.zeros((10, 2), dtype=torch.int16, device="mps")
zeros[:, 0] ^= 1
print(zeros) # Incorrect output:
# tensor([[1, 1],
# [1, 1],
# [1, 1],
# [1, 1],
# [1, 1],
# [0, 0],
# [0, 0],
# [0, 0],
# [0, 0],
# [0, 0]], device='mps:0', dtype=torch.int16)
# Non-in-place workaround
zeros = torch.zeros((10, 2), dtype=torch.int16, device="mps")
zeros[:, 0] = zeros[:, 0] ^ 1
print(zeros) # Correct output:
# tensor([[1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0]], device='mps:0', dtype=torch.int16)
```
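A compact cross-check of the same update on both backends (hedged; requires an MPS device):
```python
import torch

cpu = torch.zeros((10, 2), dtype=torch.int16)
mps = cpu.to("mps")
cpu[:, 0] ^= 1
mps[:, 0] ^= 1
# Expected True; printing False reproduces the bug above.
print(torch.equal(cpu, mps.cpu()))
```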
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:35:20) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] onnx==1.17.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[conda] numpy 2.1.2 py312h801f5e3_0 conda-forge
[conda] pytorch 2.5.1 py3.12_0 pytorch
[conda] torchaudio 2.5.1 py312_cpu pytorch
[conda] torchvision 0.20.1 py312_cpu pytorch
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,798,234,888
|
PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: foreach_frontend",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145202
See #145101 for details.
| true
|
2,798,233,149
|
PEP585 update - torch/utils
|
aorenste
|
closed
|
[
"oncall: jit",
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: foreach_frontend",
"topic: not user facing",
"suppress-bc-linter"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145201
See #145101 for details.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mingfeima @XiaobingSuper @ashokei @jingxu10
| true
|
2,798,231,873
|
PEP585 update - torch/testing
|
aorenste
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: distributed (rpc)",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145200
See #145101 for details.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,798,230,013
|
PEP585 update - torch/ao
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx",
"release notes: AO frontend"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145199
See #145101 for details.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,798,228,958
|
PEP585 update - torch/_inductor
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"suppress-bc-linter"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145198
See #145101 for details.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,798,088,684
|
Use std::string_view in get_fully_qualified_type_name
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
COLLABORATOR
|
The same as #139164, but opened as a new PR due to the messy history there.
| true
|
2,798,084,002
|
Guard size oblivious within empty_tensor_restride_symint
|
bobrenjc93
|
closed
|
[
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145196
* #145047
* #143961
| true
|
2,798,062,786
|
[CI][CUDA][Distributed][FSDP] Remove hardcoded world size of 2
|
nWEIdia
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
as these unit tests would fail if run on a single GPU (i.e. `skip_if_lt_x_gpu(2)` seems to view the world size as 2 even on platforms with 1 GPU).
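A minimal sketch of the intent (illustrative only, not the exact diff): derive the effective world size from the visible devices instead of hardcoding 2, so single-GPU runners skip or shrink cleanly.
```python
import torch

def effective_world_size(requested: int = 2) -> int:
    # Clamp the requested world size to the number of visible GPUs.
    return min(requested, max(torch.cuda.device_count(), 1))
```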
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @atalman @malfet @ptrblck @eqy @tinglvv
| true
|
2,797,985,569
|
Add transpose support for CppMicroGemmFP32Vec
|
CaoE
|
closed
|
[
"module: cpu",
"open source",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 1
|
COLLABORATOR
|
* Add transposed B support for CppMicroGemmFP32Vec
* Add support for arbitrary N size
Expand CppMicroGemmFP32Vec to generate GEMM kernels that support a transposed B and an arbitrary N size.
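A hedged sketch of a workload that could exercise the new kernel (illustrative sizes; whether the C++ GEMM template is actually selected depends on the max-autotune configuration):
```python
import torch

@torch.compile(mode="max-autotune")
def f(a, b):
    # B is consumed transposed; N=35 is deliberately not a multiple of the vector width.
    return a @ b.t()

a = torch.randn(128, 64)
b = torch.randn(35, 64)
out = f(a, b)
```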
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,946,135
|
DISABLED test_reuse_kernel_cuda (__main__.AOTInductorTestABICompatibleGpu)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reuse_kernel_cuda&suite=AOTInductorTestABICompatibleGpu&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845021672).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reuse_kernel_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 12376, in new_test
return value(self)
File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor.py", line 1824, in test_reuse_kernel
self.code_check_count(
File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor_utils.py", line 245, in code_check_count
).run(src_code)
RuntimeError: Expected to find "triton_poi_fused_sin_0 = loadKernel(" but did not find it
Searched string:
#include <torch/csrc/inductor/aoti_runtime/interface.h>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
#include <torch/csrc/inductor/aoti_runtime/model.h>
// Definition of AOTI runtime interface functions
From CHECK-COUNT-1: triton_poi_fused_sin_0 = loadKernel(
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_aot_inductor.py AOTInductorTestABICompatibleGpu.test_reuse_kernel_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_aot_inductor.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,908
|
DISABLED test_mixed_mm (__main__.TestPatternMatcher)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mixed_mm&suite=TestPatternMatcher&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845054943).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mixed_mm`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 346, in test_mixed_mm
self._test_mixed_impl(fn, args, True, False)
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 316, in _test_mixed_impl
self.assertEqual("mixed_mm" in code, mixed_mm_expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Booleans mismatch: False is not True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_pattern_matcher.py TestPatternMatcher.test_mixed_mm
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_pattern_matcher.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,867
|
DISABLED test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845055086).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 266, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,833
|
DISABLED test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845055086).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 266, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,778
|
DISABLED test_slice_scatter_reinplace_cuda (__main__.GPUTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 66
|
NONE
|
Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_slice_scatter_reinplace_cuda&suite=GPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845342970).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 12 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_slice_scatter_reinplace_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 7999, in test_slice_scatter_reinplace
assertGeneratedKernelCountEqual(self, 1)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 727, in assertGeneratedKernelCountEqual
self.assertEqual(torch._inductor.metrics.generated_kernel_count, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 1 but got 2.
Absolute difference: 1
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor.py GPUTests.test_slice_scatter_reinplace_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,728
|
DISABLED test_sdpa_rewriter_12_cuda (__main__.SDPAPatternRewriterCudaDynamicTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sdpa_rewriter_12_cuda&suite=SDPAPatternRewriterCudaDynamicTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35844263142).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 9 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sdpa_rewriter_12_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 612, in _test_sdpa_rewriter_12
self._check_common(dot_prod_attention, contains=False, has_dropout=True)
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 85, in _check_common
self.assertGreaterEqual(counters["inductor"]["fuse_attention"], 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1250, in assertGreaterEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 0 not greater than or equal to 1
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_fused_attention.py SDPAPatternRewriterCudaDynamicTests.test_sdpa_rewriter_12_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_fused_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,683
|
DISABLED test_sdpa_rewriter_12_cuda (__main__.SDPAPatternRewriterCudaTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sdpa_rewriter_12_cuda&suite=SDPAPatternRewriterCudaTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845055086).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sdpa_rewriter_12_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 612, in _test_sdpa_rewriter_12
self._check_common(dot_prod_attention, contains=False, has_dropout=True)
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 85, in _check_common
self.assertGreaterEqual(counters["inductor"]["fuse_attention"], 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1250, in assertGreaterEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 0 not greater than or equal to 1
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_fused_attention.py SDPAPatternRewriterCudaTests.test_sdpa_rewriter_12_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_fused_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,642
|
DISABLED test_mm_concat_cuda (__main__.FreezingGpuTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mm_concat_cuda&suite=FreezingGpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35843835162).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 9 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mm_concat_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_inductor_freezing.py", line 336, in test_mm_concat
).run(code[0])
RuntimeError: Expected to not find "triton.jit" but found it
min_elem_per_thread=0
)
@triton.jit
~~~~~~~~~~ <--- HERE
def triton_poi_fused_mm_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 144
From CHECK-NOT: triton.jit
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_inductor_freezing.py FreezingGpuTests.test_mm_concat_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_inductor_freezing.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,641
|
DISABLED test_mm_concat_cuda (__main__.FreezingGpuTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mm_concat_cuda&suite=FreezingGpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845055018).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mm_concat_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_inductor_freezing.py", line 355, in test_mm_concat
).run(code[0])
RuntimeError: Expected to not find "triton.jit" but found it
min_elem_per_thread=0
)
@triton.jit
~~~~~~~~~~ <--- HERE
def triton_poi_fused_mm_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 144
From CHECK-NOT: triton.jit
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_inductor_freezing.py FreezingGpuTests.test_mm_concat_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_inductor_freezing.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,309
|
DISABLED test_aoti_eager_cache_hit_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_cache_hit_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845054943).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_cache_hit_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1093, in test_aoti_eager_cache_hit
res_value = getattr(torch.ops.aten, op_name)(input_tensor)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: aot_compile_function.ptr() != nullptr && aot_compile_function.ptr() != Py_None INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/torch/csrc/inductor/aoti_eager/kernel_holder.cpp":507, please report a bug to PyTorch. Failed to import - torch._inductor.aoti_eager.aoti_compile_with_persistent_cache
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_aoti_eager_cache_hit_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,278
|
DISABLED test_reorder_peak_memory_dfs (__main__.TestOperatorReorderForPeakMemory)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reorder_peak_memory_dfs&suite=TestOperatorReorderForPeakMemory&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845054777).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reorder_peak_memory_dfs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_memory.py", line 200, in test_reorder_peak_memory_dfs
.run(code)
RuntimeError: Expected to find "buf3 = " but did not find it
Searched string:
stream0 = get_raw_stream(0)
triton_red_fused_sum_2.run(buf4, buf6, 1, 2048, grid=grid(1), stream=stream0)
buf1 = buf4; del buf4 # reuse
# Topologically Sorted Source Nodes: [t2], Original ATen: [aten.mm]
extern_kernels.mm(primals_2, primals_3, out=buf1)
del primals_3
buf5 = empty_strided_cuda((2048, 10), (10, 1), torch.float32)
# Topologically Sorted Source Nodes: [t4], Original ATen: [aten.mm]
extern_kernels.mm(buf1, primals_5, out=buf5)
buf7 = empty_strided_cuda((3, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [sum_2], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_3.run(buf5, buf7, 3, 6827, grid=grid(3), stream=stream0)
del buf5
buf9 = buf6; del buf6 # reuse
# Topologically Sorted Source Nodes: [sum_2, add], Original ATen: [aten.sum, aten.add]
stream0 = get_raw_stream(0)
triton_per_fused_add_sum_4.run(buf9, buf7, 1, 3, grid=grid(1), stream=stream0)
del buf7
return (buf9, primals_2, reinterpret_tensor(buf1, (1, 2048), (1, 1), 0), reinterpret_tensor(primals_5, (10, 1), (1, 10), 0), reinterpret_tensor(buf0, (10, 2048), (1, 10), 0), reinterpret_tensor(primals_4, (1, 10), (1, 1), 0), )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
primals_1 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
primals_2 = rand_strided((2048, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_3 = rand_strided((1, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_4 = rand_strided((10, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_5 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
fn = lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: buf3 =
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_memory.py TestOperatorReorderForPeakMemory.test_reorder_peak_memory_dfs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_memory.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,264
|
DISABLED test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845021511).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 14 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 266, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,945,259
|
DISABLED test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_True (__main__.TestFxGraphCache)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845055086).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 5 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 146, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 14 but got 35.
Absolute difference: 21
Relative difference: 1.5
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,896,021
|
Added torch check to ensure indices are not empty
|
abcarlisle
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fixes #142459
cc @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,797,788,860
|
[scan] scan dim handling in user-facing scan()
|
bohnstingl
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 11
|
COLLABORATOR
|
This PR adds support for handling the scan dim in the user-facing scan() call. Internally, the scan dim is always moved to dim 0, and the scan is then performed over that dim.
This is a follow-up PR from https://github.com/bohnstingl/pytorch/pull/3
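For illustration only, a minimal sketch of the dim handling described above (the names `scan_with_dim` and `combine_fn` are illustrative; this is not the actual implementation, which goes through the scan higher-order op):
```python
import torch


def scan_with_dim(combine_fn, init, xs, dim: int = 0):
    # Move the scan dim to dim 0, scan over dim 0, then move the outputs back.
    xs = torch.movedim(xs, dim, 0)
    carry = init
    outs = []
    for x in xs.unbind(0):
        carry, y = combine_fn(carry, x)
        outs.append(y)
    ys = torch.stack(outs, dim=0)
    return carry, torch.movedim(ys, 0, dim)


# e.g. a cumulative sum along dim=1 of a (3, 5) tensor:
carry, ys = scan_with_dim(lambda c, x: (c + x, c + x), torch.zeros(3), torch.randn(3, 5), dim=1)
```
This mirrors the description above: normalize the dim in the user-facing call, scan over dim 0 internally.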
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ydwu4
| true
|
2,797,774,272
|
PEP585 update - mostly toplevels
|
aorenste
|
closed
|
[
"oncall: jit",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"release notes: jit",
"topic: not user facing",
"suppress-bc-linter"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145178
See #145101 for details.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mcarilli @ptrblck @leslie-fang-intel
| true
|
2,797,773,847
|
PEP585 update - .ci android aten
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: releng",
"topic: not user facing",
"suppress-bc-linter"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145177
See #145101 for details.
| true
|
2,797,773,687
|
PEP585 update - test
|
aorenste
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145176
See #145101 for details.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,797,772,975
|
PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu
|
aorenste
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"suppress-bc-linter",
"release notes: optim",
"ci-no-td"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145175
See #145101 for details.
| true
|
2,797,772,502
|
PEP585 update - torch/onnx
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing",
"fx"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145174
See #145101 for details.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,797,730,083
|
[BE]: Improve typing for torch/fx/_pytree.py and torch/utils/_pytree.py
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 3
|
COLLABORATOR
|
Improve type inference in _pytree.py utility functions
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,797,715,500
|
[BE]: Update CUTLASS submodule to 3.7.0
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 17
|
COLLABORATOR
|
* This has a couple of new features, but mostly has a lot of bugfixes for the prior releases
* This is the last Hopper-focused release of CUTLASS before blackwell drops, so let's upgrade to it.
* Most of the remaining diff noise is copyright year updates on the CUTLASS submodule
| true
|
2,797,658,331
|
torch/_prims/executor.py #TODO : caching
|
Andrwaa
|
closed
|
[] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
I'm working on the `# TODO: caching` item in torch/_prims/executor.py, and this is my idea for implementing that functionality:
```
from typing import Any, Callable, Optional, TypeVar
from typing_extensions import ParamSpec, TypeVarTuple, Unpack
import hashlib
import pickle
import inspect

from torch._prims.context import TorchRefsMode
from torch.fx import GraphModule
from torch.fx.experimental.proxy_tensor import make_fx, wrapper_and_args_for_make_fx

T = TypeVar("T")
P = ParamSpec("P")
Ts = TypeVarTuple("Ts")


def execute(
    gm: GraphModule,
    *args: Unpack[Ts],
    executor: str = "aten",
    executor_parameters: Optional[dict] = None,
) -> Any:
    """
    Prototype ATen executor.

    Just executes the context's graph.
    """
    if executor == "aten":
        return gm.forward(*args)

    msg = f"Received unexpected value for 'executor': {executor}. Allowed values are: aten."
    raise ValueError(msg)


def compute_cache_key(fn: Callable, args: tuple, kwargs: dict) -> str:
    """
    Compute a unique key for the function and its parameters (args, kwargs).
    The key is based on the function's source code and serialized arguments.
    """
    fn_code = pickle.dumps(inspect.getsource(fn).encode("utf-8"))
    args_data = pickle.dumps((args, kwargs))
    return hashlib.sha256(fn_code + args_data).hexdigest()


_cache: dict = {}


def make_traced(fn: Callable[P, T]) -> Callable[P, T]:
    """
    Returns a traced function that caches traced graphs so they can be
    reused on later calls instead of being re-traced.
    """

    def _traced(*args: P.args, **kwargs: P.kwargs) -> T:
        executor = str(kwargs.pop("executor", "aten"))
        cache_key = compute_cache_key(fn, args, kwargs)
        # The traced graph takes the flattened argument list produced by
        # wrapper_and_args_for_make_fx, so compute it on every call (cheap),
        # including cache hits.
        wrapped, all_args = wrapper_and_args_for_make_fx(fn, args, kwargs)
        if cache_key in _cache:
            gm = _cache[cache_key]
        else:
            with TorchRefsMode():
                gm = make_fx(wrapped)(all_args)
            _cache[cache_key] = gm
        return execute(gm, all_args, executor=executor)

    return _traced
```
My doubt is whether pickle also works with complex types, such as tensors, for generating a unique key to store the graph under.
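For what it's worth, tensors are picklable, so `pickle.dumps` does work — but the resulting bytes depend on the tensor's values, so the key changes whenever the data changes and key computation scales with tensor size. If the traced graph only depends on argument metadata (shape, dtype, device, strides), a key could instead be built from that metadata. A minimal sketch, under that assumption (the `describe_arg` helper is hypothetical, not part of torch):
```python
import hashlib
import inspect
import pickle
from typing import Any, Callable

import torch


def describe_arg(arg: Any) -> Any:
    # Replace tensors with hashable metadata so the key ignores their values.
    if isinstance(arg, torch.Tensor):
        return ("tensor", tuple(arg.shape), str(arg.dtype), str(arg.device), tuple(arg.stride()))
    if isinstance(arg, (list, tuple)):
        return (type(arg).__name__, tuple(describe_arg(a) for a in arg))
    if isinstance(arg, dict):
        return tuple(sorted((k, describe_arg(v)) for k, v in arg.items()))
    return arg


def compute_cache_key(fn: Callable, args: tuple, kwargs: dict) -> str:
    fn_code = inspect.getsource(fn).encode("utf-8")
    args_data = pickle.dumps((describe_arg(args), describe_arg(kwargs)))
    return hashlib.sha256(fn_code + args_data).hexdigest()
```
Whether metadata alone is a safe key depends on whether tracing can specialize on values (e.g. data-dependent control flow), so this is only a sketch, not a drop-in replacement.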
| true
|
2,797,580,890
|
CUDA initialization error with vLLM 0.5.4 and PyTorch 2.4.0+cu121
|
TaoShuchang
|
open
|
[
"oncall: distributed"
] | 0
|
NONE
|
### 🐛 Describe the bug
CUDA initialization error in forked subprocesses when using **vLLM 0.5.4 with PyTorch 2.4.0+cu121**. The same code works with vLLM 0.5.0 and PyTorch 2.3.0+cu121, but fails with 0.5.4/2.4.0 and also with newer versions (vLLM 0.6.2 with PyTorch 2.5.1+cu121).
**Error Message:**
```
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```
**Environment:**
- vLLM: 0.5.4 (Also tested with 0.6.2)
- PyTorch: 2.4.0+cu121 (Also tested with 2.5.1+cu121)
- CUDA: 12.2
- GPU Driver: 535.54.03
- OS: [Add your operating system here]
**Steps to Reproduce:**
1. Install vLLM 0.5.4 and PyTorch 2.4.0+cu121
2. Set the following environment variables:
```bash
export PYTHONMULTIPROCESSING_START_METHOD=spawn
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
export VLLM_USE_SPAWN=1
export PYTHONPATH="${PYTHONPATH}:/mnt/data/taoshuchang.tsc/IR_RAG/IRM"
export MASTER_PORT=29500
export MASTER_ADDR=localhost
export WORLD_SIZE=$(echo $CUDA_VISIBLE_DEVICES | tr ',' '\n' | wc -l)
export CUDA_DEVICE_ORDER=PCI_BUS_ID
```
3. Run a script that uses vLLM to load a large language model (e.g., LLaMA 3.3-70B)
4. Observe the CUDA initialization error in forked subprocesses
**Additional Context:**
- This issue doesn't occur with vLLM 0.5.0 and PyTorch 2.3.0+cu121
- vLLM 0.5.4 is required to load LLaMA 3.3, which necessitates PyTorch 2.4.0+cu121
**Questions:**
1. Is this a known issue with PyTorch 2.4.0+cu121 and newer versions?
2. Are there any workarounds or configurations to resolve this issue?
3. Is there a compatibility matrix for vLLM, PyTorch, and CUDA versions?
**Attempted Solutions:**
- Tried vLLM 0.6.2 with PyTorch 2.5.1+cu121, but encountered the same error
- Set `PYTHONMULTIPROCESSING_START_METHOD=spawn` as suggested by the error message (a programmatic alternative is sketched below)
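For reference, a minimal sketch of forcing the `spawn` start method programmatically (an editorial sketch, not a verified fix for this setup; as far as I know, Python's multiprocessing does not read a `PYTHONMULTIPROCESSING_START_METHOD` environment variable, and recent vLLM versions also expose a `VLLM_WORKER_MULTIPROC_METHOD` environment variable that can be set to `spawn`):
```python
import multiprocessing as mp

if __name__ == "__main__":
    # Must run before anything initializes CUDA in the parent process.
    mp.set_start_method("spawn", force=True)

    from vllm import LLM  # import after the start method is set

    # "<model_path>" is a placeholder for the actual checkpoint path.
    llm = LLM(model="<model_path>", tensor_parallel_size=8)
```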
**Full error trace:**
```
INFO 01-19 03:55:21 config.py:899] Defaulting to use mp for distributed inference
INFO 01-19 03:55:21 llm_engine.py:226] Initializing an LLM engine (v0.6.1.dev238+ge2c6e0a82) with config: model='/mnt/data/taoshuchang.tsc/IR_RAG/ckpt/hotpot_contriever/analyze_merge//hotpot_1doc_other_Meta-Llama-3-70B-Instruct_lr1e5', speculative_config=None, tokenizer='/mnt/data/taoshuchang.tsc/IR_RAG/ckpt/hotpot_contriever/analyze_merge//hotpot_1doc_other_Meta-Llama-3-70B-Instruct_lr1e5', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=8, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=/mnt/data/taoshuchang.tsc/IR_RAG/ckpt/hotpot_contriever/analyze_merge//hotpot_1doc_other_Meta-Llama-3-70B-Instruct_lr1e5, use_v2_block_manager=False, num_scheduler_steps=1, multi_step_stream_outputs=False, enable_prefix_caching=False, use_async_output_proc=True, use_cached_outputs=False, mm_processor_kwargs=None)
WARNING 01-19 03:55:22 multiproc_gpu_executor.py:53] Reducing Torch parallelism from 32 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 01-19 03:55:22 custom_cache_manager.py:17] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
(VllmWorkerProcess pid=305) INFO 01-19 03:55:24 multiproc_worker_utils.py:218] Worker ready; awaiting tasks
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] Exception in worker VllmWorkerProcess while processing method init_device: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method, Traceback (most recent call last):
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] output = executor(*args, **kwargs)
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/worker/worker.py", line 166, in init_device
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch.cuda.set_device(self.device)
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 478, in set_device
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch._C._cuda_setDevice(device)
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 305, in _lazy_init
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] raise RuntimeError(
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
(VllmWorkerProcess pid=305) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233]
(VllmWorkerProcess pid=307) INFO 01-19 03:55:24 multiproc_worker_utils.py:218] Worker ready; awaiting tasks
(VllmWorkerProcess pid=308) INFO 01-19 03:55:24 multiproc_worker_utils.py:218] Worker ready; awaiting tasks
(VllmWorkerProcess pid=306) INFO 01-19 03:55:24 multiproc_worker_utils.py:218] Worker ready; awaiting tasks
(VllmWorkerProcess pid=303) INFO 01-19 03:55:24 multiproc_worker_utils.py:218] Worker ready; awaiting tasks
(VllmWorkerProcess pid=304) INFO 01-19 03:55:24 multiproc_worker_utils.py:218] Worker ready; awaiting tasks
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] Exception in worker VllmWorkerProcess while processing method init_device: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method, Traceback (most recent call last):
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] output = executor(*args, **kwargs)
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/worker/worker.py", line 166, in init_device
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch.cuda.set_device(self.device)
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 478, in set_device
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch._C._cuda_setDevice(device)
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 305, in _lazy_init
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] raise RuntimeError(
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
(VllmWorkerProcess pid=307) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233]
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] Exception in worker VllmWorkerProcess while processing method init_device: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method, Traceback (most recent call last):
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] output = executor(*args, **kwargs)
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/worker/worker.py", line 166, in init_device
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch.cuda.set_device(self.device)
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 478, in set_device
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch._C._cuda_setDevice(device)
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 305, in _lazy_init
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] raise RuntimeError(
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
(VllmWorkerProcess pid=308) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233]
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] Exception in worker VllmWorkerProcess while processing method init_device: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method, Traceback (most recent call last):
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] output = executor(*args, **kwargs)
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/worker/worker.py", line 166, in init_device
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch.cuda.set_device(self.device)
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 478, in set_device
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch._C._cuda_setDevice(device)
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 305, in _lazy_init
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] raise RuntimeError(
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
(VllmWorkerProcess pid=303) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233]
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] Exception in worker VllmWorkerProcess while processing method init_device: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method, Traceback (most recent call last):
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] output = executor(*args, **kwargs)
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/worker/worker.py", line 166, in init_device
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch.cuda.set_device(self.device)
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 478, in set_device
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch._C._cuda_setDevice(device)
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 305, in _lazy_init
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] raise RuntimeError(
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
(VllmWorkerProcess pid=306) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233]
(VllmWorkerProcess pid=302) INFO 01-19 03:55:24 multiproc_worker_utils.py:218] Worker ready; awaiting tasks
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] Exception in worker VllmWorkerProcess while processing method init_device: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method, Traceback (most recent call last):
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] output = executor(*args, **kwargs)
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/worker/worker.py", line 166, in init_device
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch.cuda.set_device(self.device)
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 478, in set_device
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch._C._cuda_setDevice(device)
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 305, in _lazy_init
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] raise RuntimeError(
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
(VllmWorkerProcess pid=304) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233]
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] Exception in worker VllmWorkerProcess while processing method init_device: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method, Traceback (most recent call last):
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] output = executor(*args, **kwargs)
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] ^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/worker/worker.py", line 166, in init_device
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch.cuda.set_device(self.device)
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 478, in set_device
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] torch._C._cuda_setDevice(device)
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/cuda/__init__.py", line 305, in _lazy_init
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] raise RuntimeError(
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233] RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
(VllmWorkerProcess pid=302) ERROR 01-19 03:55:24 multiproc_worker_utils.py:233]
ERROR 01-19 04:05:25 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 305 died, exit code: -15
INFO 01-19 04:05:25 multiproc_worker_utils.py:124] Killing local vLLM worker processes
/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/_distutils_hack/__init__.py:54: UserWarning: Reliance on distutils from stdlib is deprecated. Users must rely on setuptools to provide the distutils module. Avoid importing distutils or import setuptools first, and avoid setting SETUPTOOLS_USE_DISTUTILS=stdlib. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils-deprecation.yml
warnings.warn(
Traceback (most recent call last):
File "/mnt/data/taoshuchang.tsc/IR_RAG/IRM/inference_new.py", line 299, in <module>
main()
File "/mnt/data/taoshuchang.tsc/IR_RAG/IRM/inference_new.py", line 291, in main
evaluate_retrieval(
File "/mnt/data/taoshuchang.tsc/IR_RAG/IRM/inference_new.py", line 236, in evaluate_retrieval
llm = load_model(model_path, enable_lora=enable_lora)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/taoshuchang.tsc/IR_RAG/IRM/inference_new.py", line 49, in load_model
llm = LLM(model=model_path, tensor_parallel_size=torch.cuda.device_count(), enable_lora=enable_lora)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 214, in __init__
self.llm_engine = LLMEngine.from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 564, in from_engine_args
engine = cls(
^^^^
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 325, in __init__
self.model_executor = executor_class(
^^^^^^^^^^^^^^^
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
super().__init__(*args, **kwargs)
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 47, in __init__
self._init_executor()
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/multiproc_gpu_executor.py", line 110, in _init_executor
self._run_workers("init_device")
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/executor/multiproc_gpu_executor.py", line 185, in _run_workers
driver_worker_output = driver_worker_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/worker/worker.py", line 176, in init_device
init_worker_distributed_environment(self.parallel_config, self.rank,
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/worker/worker.py", line 448, in init_worker_distributed_environment
init_distributed_environment(parallel_config.world_size, rank,
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/vllm/distributed/parallel_state.py", line 946, in init_distributed_environment
torch.distributed.init_process_group(
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 83, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 97, in wrapper
func_return = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 1520, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/distributed/rendezvous.py", line 221, in _tcp_rendezvous_handler
store = _create_c10d_store(
^^^^^^^^^^^^^^^^^^^
File "/mnt/data/taoshuchang.tsc/anaconda3/envs/py311llama33/lib/python3.11/site-packages/torch/distributed/rendezvous.py", line 189, in _create_c10d_store
return TCPStore(
^^^^^^^^^
torch.distributed.DistStoreError: Timed out after 601 seconds waiting for clients. 1/8 clients joined.
ERROR conda.cli.main_run:execute(49): `conda run python -u /mnt/data/taoshuchang.tsc/IR_RAG/IRM/inference_new.py --test_data_path /mnt/data/taoshuchang.tsc/IR_RAG/IRM/datasets/hotpot_contriever/hotpot-dev.json --batch_size 4 --model_path /mnt/data/taoshuchang.tsc/IR_RAG/ckpt/hotpot_contriever/analyze_merge//hotpot_1doc_other_Meta-Llama-3-70B-Instruct_lr1e5 --result_save_path /mnt/data/taoshuchang.tsc/IR_RAG/IRM/result/hotpot_contriever/analyze/hotpot_1doc_other_Meta-Llama-3-70B-Instruct_lr1e5.json --batch True` failed. (See above for error)
```
Thank you for your help in resolving this issue.
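As a reference, here is a minimal sketch of the 'spawn' start method the error message asks for, using plain `torch.multiprocessing` (this is only an illustration of the general pattern, not of how vLLM wires up its own workers):
```python
import torch
import torch.multiprocessing as mp

def worker(rank: int) -> None:
    # CUDA is initialized inside each spawned child, which is allowed,
    # unlike initializing CUDA after a fork().
    torch.cuda.set_device(rank)
    print(f"rank {rank} using device {torch.cuda.current_device()}")

if __name__ == "__main__":
    # 'spawn' is the start method the RuntimeError above asks for.
    mp.spawn(worker, nprocs=2)
```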
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,797,530,173
|
Added weight to MSELoss Criterion
|
JacobGlennAyers
|
closed
|
[
"triaged",
"open source",
"Stale",
"release notes: nn",
"topic: improvements"
] | 4
|
NONE
|
- Changed Inheritance of MSELoss from _Loss to _WeightedLoss
- Modified MSELoss to include weight parameter
- Removed TODO
- Added weight documentation to MSELoss Class
topic: enhancement
release notes: nn
I couldn't find this in any issues or under any existing PR Requests, I only found it by finding the TODO in the loss.py file.
Edit - Accidental Markdown all caps removed
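For context, a minimal sketch of what an elementwise `weight` on MSE computes (an illustration of the idea only, not the exact code added in this PR):
```python
import torch

def weighted_mse(input, target, weight, reduction="mean"):
    # Elementwise squared error scaled by a broadcastable weight tensor.
    loss = weight * (input - target) ** 2
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss

x = torch.randn(4, 3)
y = torch.randn(4, 3)
w = torch.tensor([0.2, 0.3, 0.5])
print(weighted_mse(x, y, w))
```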
| true
|
2,797,476,080
|
empty_cache does not work for CUDAPluggableAllocator + MemPool
|
youkaichao
|
open
|
[
"module: cuda",
"triaged"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
I'm trying to use `CUDAPluggableAllocator`, following https://pytorch.org/docs/stable/notes/cuda.html#using-custom-memory-allocators-for-cuda . However, it has a critical limitation, that `torch.cuda.memory.change_current_allocator` needs to be called before any allocation, and we cannot switch the allocator.
Following @syed-ahmed 's suggestion, I'm trying to use `CUDAPluggableAllocator` with `MemPool`, and it seems to work, in the sense that I can switch between allocators. However, I find that, in this way, the pool never returns memory to the underlying allocator.
Here is a simple demonstration code snippet:
```python
import torch
import torch.utils.cpp_extension
cpp_sources = """
// save as alloc.cc
// compile with g++ alloc.cc -o alloc.so -I/usr/local/cuda/include -shared -fPIC
#include <sys/types.h>
#include <cuda_runtime_api.h>
#include <iostream>
// Compile with g++ alloc.cc -o alloc.so -I/usr/local/cuda/include -shared -fPIC
extern "C" {
void* my_malloc(ssize_t size, int device, cudaStream_t stream) {
void *ptr;
cudaMalloc(&ptr, size);
std::cout<<"C side: alloc "<<ptr<< " " <<size<<std::endl;
return ptr;
}
void my_free(void* ptr, ssize_t size, int device, cudaStream_t stream) {
std::cout<<"C side: free "<<ptr<< " "<<size<<std::endl;
cudaFree(ptr);
}
// hack: add this placeholder function to let PyTorch generate module extension template
at::Tensor sin_add(at::Tensor x, at::Tensor y) {
return x.sin() + y.sin();
}
}
"""
module = torch.utils.cpp_extension.load_inline("alloc", cpp_sources, with_cuda=True, functions=['sin_add'])
so_file = module.__file__
def f():
new_alloc = torch.cuda.memory.CUDAPluggableAllocator(
so_file, 'my_malloc', 'my_free')
with torch.cuda.use_mem_pool(torch.cuda.MemPool(new_alloc._allocator)):
for factor in (1024, 1024 ** 2):
print(f"Allocate {60 * factor} bytes of memory on the GPU from Python")
data = torch.empty((60, factor), dtype=torch.uint8, device="cuda")
print(f"Free {60 * factor} bytes of memory on the GPU from Python")
del data
print("Python side: memory is released")
print(f"Allocate {70 * factor} bytes of memory on the GPU from Python")
data = torch.empty((70, factor), dtype=torch.uint8, device="cuda")
print(f"Free {70 * factor} bytes of memory on the GPU from Python")
del data
print("Python side: memory is released")
# torch.cuda.empty_cache() here will error: RuntimeError: captures_underway.empty() INTERNAL ASSERT FAILED at "../c10/cuda/CUDACachingAllocator.cpp":2967, please report a bug to PyTorch.
# torch.cuda.empty_cache() here does not take effect.
f()
import gc
gc.collect()
```
Running the code, we can see that `C side: alloc ` is called properly. However, `C side: free ` is never called.
In addition, if I call `torch.cuda.empty_cache()` inside `with torch.cuda.use_mem_pool`, it will trigger an assertion error.
Ultimately, my goal is to switch between `CUDAPluggableAllocator` and the default allocator, and also `empty_cache` for the `CUDAPluggableAllocator`.
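For clarity, a small sketch of the usage pattern I am aiming for, building on `new_alloc` from the snippet above (the last line is where memory is currently never returned to `my_free`):
```python
pool = torch.cuda.MemPool(new_alloc._allocator)

with torch.cuda.use_mem_pool(pool):
    a = torch.empty(1024, device="cuda")  # served by my_malloc
b = torch.empty(1024, device="cuda")      # served by the default caching allocator

del a, b
torch.cuda.empty_cache()  # ideally this would also return the pool's cached blocks to my_free
```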
### Versions
PyTorch 2.5.1+cu124
cc @ptrblck @msaroufim @eqy
| true
|
2,797,406,816
|
[BE]: Update NCCL submodule to 2.24.3
|
tmm1
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Update NCCL to the latest version
Last bump was in https://github.com/pytorch/pytorch/pull/124014
See upstream release notes here: https://docs.nvidia.com/deeplearning/nccl/release-notes/rel_2-24-3.html#rel_2-24-3
cc @Skylion007
| true
|
2,797,269,721
|
PEP585 update - torch/fx
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145166
See #145101 for details.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,797,269,550
|
PEP585 update - torch/export
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: export",
"suppress-bc-linter"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145165
See #145101 for details.
| true
|
2,797,269,212
|
PEP585 update - torch/distributed
|
aorenste
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (sharded)",
"topic: not user facing",
"suppress-bc-linter",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145164
See #145101 for details.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,797,269,037
|
PEP585 update - torch/distributed/elastic torch/distributed/checkpoint
|
aorenste
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"suppress-bc-linter",
"release notes: distributed (torchelastic)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145163
See #145101 for details.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,797,267,366
|
PEP585 update - torch/distributed/fsdp
|
aorenste
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"topic: not user facing",
"ciflow/inductor",
"suppress-bc-linter"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145162
See #145101 for details.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,797,260,843
|
[mps/inductor] Introduce a metal approx for erf() and use it.
|
dcci
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
MEMBER
|
Probably we can do better, but this is a start.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,221,337
|
[MPSInductor] Add `TrueDiv` and `Round[Int|Decimal]`
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145156
* __->__ #145160
That fixes `test_builtins_round_float_ndigits_neg` and `test_builtins_round`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,219,150
|
Enable bfloat16 testing on MacOS14+
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145160
* #145156
* __->__ #145159
* #145157
As Metal-3.1 supports this dtype
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,215,927
|
Pytorch matmul for nested 4D tensors in jagged layout doesn't work
|
GabMartino
|
open
|
[
"triaged",
"module: nestedtensor"
] | 8
|
NONE
|
### 🐛 Describe the bug
Why doesn't this code work, even though it is suggested to use the jagged layout:
```python
x = torch.nested.nested_tensor([torch.randn(4, 100, 16),
torch.randn(4, 150, 16)], layout=torch.jagged)
y = torch.nested.nested_tensor([torch.randn(4, 16, 100),
torch.randn(4, 16, 150)], layout=torch.jagged)
v = torch.matmul(x, y)
```
reporting this error:
```bash
RuntimeError: matmul(): not supported between inputs of shapes (2, 4, j1, 16) and torch.Size([2, 4, 16, j2])
```
With the strided layout, instead, it works perfectly.
Do the "j1" and "j2" suggest a wrong arrangement of the tensors?
Thank you to everyone!
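For comparison, a small sketch of the strided-layout variant that, as described above, does work:
```python
import torch

x = torch.nested.nested_tensor([torch.randn(4, 100, 16),
                                torch.randn(4, 150, 16)], layout=torch.strided)
y = torch.nested.nested_tensor([torch.randn(4, 16, 100),
                                torch.randn(4, 16, 150)], layout=torch.strided)
v = torch.matmul(x, y)  # works with the strided layout
```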
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.9.21 (main, Dec 4 2024, 08:53:34) [GCC 13.2.0] (64-bit runtime)
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.6.0+cu124
[pip3] torchmetrics==1.6.1
[pip3] triton==3.2.0
[conda] Could not collect
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,797,171,719
|
[MPSInductor][BE] NaN-propagating min/max to header
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145160
* #145156
* #145159
* __->__ #145157
May later be reused from the eager op as well.
Also, didn't know that Metal already has type_traits.
And use `metal::isunordered(a, b)` instead of `metal::isnan(a + b)`, as it is defined as a function equivalent to `a != a || b != b`, but I suspect it might have a better native implementation for the specific architecture.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,171,696
|
Make `inductor_utils.requires_gpu` accept MPS
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145156
Not yet ready to set HAS_GPU to true, but can unskip tests that require GPU
(Noticed while running test_mps_basics.py that `test_scalar_cpu_tensor_arg` is getting skipped)
- Replace `GPU_TYPE` with `self.device` in `test_custom_op_fixed_layout_sequential`, `test_inductor_layout_optimization_input_mutations`, `test_mutable_custom_op_fixed_layout2`, otherwise the GPU tests are just running for `_cpu` suffixes.
- Tweak `test_tmp_not_defined_issue3` to work correctly on CPU, by defining `test_device` and `test_device_0`
- UnXFail `test_mutable_custom_op_fixed_layout2_dynamic_shapes` as it should just work on CPU
- Add `skip_if_no_triton` decorator and decorate `test_reduction_config_limit` with it, as it needs neither CPU nor GPU, but rather a Triton backend (a hedged sketch of such a decorator is shown below).
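A hedged sketch of what such a decorator could look like (the actual helper in this PR may differ):
```python
import unittest

def skip_if_no_triton(fn):
    # Skip the test unless a Triton backend is importable; no CPU/GPU check involved.
    try:
        import triton  # noqa: F401
        has_triton = True
    except ImportError:
        has_triton = False
    return unittest.skipUnless(has_triton, "requires triton")(fn)
```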
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,123,821
|
The latest PyTorch XPU wheel 2.7.0.dev20250117+xpu does not work on Windows
|
pbchekin
|
closed
|
[
"module: binaries",
"module: windows",
"triaged",
"module: xpu"
] | 10
|
NONE
|
Steps:
```
# This installs 2.7.0.dev20250117+xpu
pip install torch --index-url https://download.pytorch.org/whl/nightly/xpu
python -c 'import torch;print(torch.__version__)'
```
Result:
```
OSError: [WinError 126] The specified module could not be found. Error loading "C:\.venv\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
```
The last known wheel that worked is `2.7.0.dev20250110+xpu`:
```
# This installs 2.7.0.dev20250110+xpu
pip install torch --index-url https://download.pytorch.org/whl/nightly/xpu
python -c 'import torch;print(torch.__version__)'
```
Result:
```
2.7.0.dev20250110+xpu
```
cc @seemethere @malfet @osalpekar @atalman @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,797,071,173
|
Tweak schema_check to handle annotated builtin types
|
aorenste
|
closed
|
[
"Merged",
"ciflow/inductor",
"release notes: export"
] | 1
|
CONTRIBUTOR
|
As of python 3.9 annotated lists can be written as `list[T]` and `List[T]` has been deprecated. However schema_check was converting `list[T]` to simply be `list`. This change teaches it to handle `list[T]` the same as `List[T]`.
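A small sketch of the normalization this relies on, using `typing.get_origin` to treat both spellings alike (illustrative only, not the exact schema_check code):
```python
import typing

def annotation_base(annotation):
    # Both list[int] (PEP 585) and typing.List[int] report `list` as their origin.
    return typing.get_origin(annotation) or annotation

assert annotation_base(list[int]) is list
assert annotation_base(typing.List[int]) is list
assert annotation_base(list) is list
```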
A couple small drive-by changes I noticed as well:
- Path concatenation should use `os.path.join`, not `+`
- Spelling in error message
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145138
* __->__ #145154
| true
|
2,797,067,669
|
[BE]: Apply ruff PERF401 to torch
|
Skylion007
|
open
|
[
"oncall: distributed",
"oncall: jit",
"open source",
"better-engineering",
"ciflow/trunk",
"release notes: quantization",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 12
|
COLLABORATOR
|
Applies PERF401 optimizations to torch.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,060,620
|
[BE]: Simplify set add with set update
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"Reverted",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export",
"ci-no-td"
] | 13
|
COLLABORATOR
|
Simplifies the set update slightly to be more readable and efficient.
| true
|
2,797,056,331
|
Driver Allocated Memory grows unrestricted when using torch.unique on MPS device
|
BjoernBiltzinger
|
closed
|
[
"module: memory usage",
"triaged",
"module: mps"
] | 2
|
NONE
|
### 🐛 Describe the bug
When using `torch.unique` in a loop on the MPS backend, the memory allocated by the driver grows unrestricted. In my real application this leads to a `RuntimeError: MPS backend out of memory (MPS allocated: 24.00 MB, other allocations: 36.24 GB, max allowed: 36.27 GB)` error late in training.
I created this minimal example with the same behaviour.
```python
import torch
import gc
def test_operations(iterations: int, shape: tuple[int, int]) -> None:
print(f"PyTorch version: {torch.__version__}")
# Test 1: torch.unique
print("\nTest 1: torch.unique")
x = torch.randint(0, 2, shape, device="mps")
for i in range(iterations):
y = torch.unique(x)
del y
# Empty cache and collect garbage to make sure
torch.mps.empty_cache()
gc.collect()
if i % 10 == 0:
print(
f"Iter {i}: Driver Allocated Memory: {torch.mps.driver_allocated_memory() / (1024**2):.2f}MB, Current Allocated Memory: {torch.mps.current_allocated_memory() / (1024**2):.2f}MB"
)
# Test 2: torch.sort (comparison)
print("\nTest 2: torch.sort")
for i in range(iterations):
y = torch.sort(x)[0]
del y
# Empty cache and collect garbage to make sure
torch.mps.empty_cache()
gc.collect()
if i % 10 == 0:
print(
f"Iter {i}: Driver memory: {torch.mps.driver_allocated_memory() / (1024**2):.2f}MB, Current memory: {torch.mps.current_allocated_memory() / (1024**2):.2f}MB"
)
test_operations(iterations=100, shape=(2000, 10))
```
Results in
```
PyTorch version: 2.5.1
Test 1: torch.unique
Iter 0: Driver Allocated Memory: 18.73MB, Current Allocated Memory: 0.15MB
Iter 10: Driver Allocated Memory: 98.73MB, Current Allocated Memory: 0.15MB
Iter 20: Driver Allocated Memory: 178.73MB, Current Allocated Memory: 0.15MB
Iter 30: Driver Allocated Memory: 258.73MB, Current Allocated Memory: 0.15MB
Iter 40: Driver Allocated Memory: 338.73MB, Current Allocated Memory: 0.15MB
Iter 50: Driver Allocated Memory: 418.73MB, Current Allocated Memory: 0.15MB
Iter 60: Driver Allocated Memory: 578.73MB, Current Allocated Memory: 0.15MB
Iter 70: Driver Allocated Memory: 738.72MB, Current Allocated Memory: 0.15MB
Iter 80: Driver Allocated Memory: 898.72MB, Current Allocated Memory: 0.15MB
Iter 90: Driver Allocated Memory: 1058.72MB, Current Allocated Memory: 0.15MB
Test 2: torch.sort
Iter 0: Driver memory: 1202.72MB, Current memory: 0.15MB
Iter 10: Driver memory: 1202.72MB, Current memory: 0.15MB
Iter 20: Driver memory: 1202.72MB, Current memory: 0.15MB
Iter 30: Driver memory: 1202.72MB, Current memory: 0.15MB
Iter 40: Driver memory: 1202.72MB, Current memory: 0.15MB
Iter 50: Driver memory: 1202.72MB, Current memory: 0.15MB
Iter 60: Driver memory: 1202.72MB, Current memory: 0.15MB
Iter 70: Driver memory: 1202.72MB, Current memory: 0.15MB
Iter 80: Driver memory: 1202.72MB, Current memory: 0.15MB
Iter 90: Driver memory: 1202.72MB, Current memory: 0.15MB
```
This shows the increase in driver allocated memory when using `torch.unique` but not when using another function like `torch.sort`; I just used `torch.sort` as a comparison here.
Is this behaviour expected?
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.11 (main, Dec 3 2024, 17:20:40) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] torch==2.5.1
[conda] Could not collect
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,797,045,905
|
[inductor] Simplify _inductor/utils.py slightly
|
rec
|
closed
|
[
"oncall: distributed",
"module: rocm",
"open source",
"better-engineering",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145150
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,045,870
|
[inductor] Add type annotations to _inductor/utils.py
|
rec
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145150
* __->__ #145149
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,797,010,678
|
[BE][PYFMT] bump `ruff format` target version to py39: add parentheses around long `with`-statements
|
XuehaiPan
|
closed
|
[
"open source",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 10
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145606
* #144546
* #144569
* __->__ #145148
* #146509
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,796,955,392
|
[BE][Easy] increase pip timeout for nightly tool: 15s -> 60s
|
XuehaiPan
|
open
|
[
"open source",
"Stale",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145147
| true
|
2,796,951,812
|
improve perf for layer_norm
|
ywq880611
|
closed
|
[
"triaged",
"open source",
"Stale",
"release notes: cuda"
] | 2
|
CONTRIBUTOR
|
Fixes #145145
Please see more details in the issue.
| true
|
2,796,940,106
|
[RFC] Improve performance for layer_norm op for cuda with revectorized
|
ywq880611
|
open
|
[
"module: nn",
"module: cuda",
"triaged",
"topic: performance"
] | 4
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
I found there is a big perf drop if layer_norm's inner size is not a multiple of 4. Here is a micro test case:
```python
import torch
DEVICE=torch.device('cuda')
# Time cost for near 1024
for cnt in range(2040, 2050):
x = torch.randn(4096, cnt, device=DEVICE, dtype=torch.float32)
w_shape = (x.shape[-1], )
#warm up
need_warmup = True
round = 5
if need_warmup:
for _ in range(round):
output = torch.nn.functional.layer_norm(x, w_shape)
torch.cuda.synchronize()
start_time = torch.cuda.Event(enable_timing=True)
end_time = torch.cuda.Event(enable_timing=True)
# Start time
start_time.record()
# Apply layernorm
for _ in range(round):
output = torch.nn.functional.layer_norm(x, w_shape)
# End time
end_time.record()
torch.cuda.synchronize()
# Calculate elapsed time
elapsed_time_ms = start_time.elapsed_time(end_time)
# print(f"CUDA Time: {elapsed_time_ms:.6f} ms")
gbps = lambda ms: round * 2 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)
print(f"n as {cnt} of softmax: {gbps(elapsed_time_ms):.6f} gb/s")
```
Its output is:
```
n as 2040 of softmax: 483.555543 gb/s
n as 2041 of softmax: 345.531858 gb/s
n as 2042 of softmax: 345.369984 gb/s
n as 2043 of softmax: 347.825623 gb/s
n as 2044 of softmax: 489.580822 gb/s
n as 2045 of softmax: 345.591973 gb/s
n as 2046 of softmax: 346.057951 gb/s
n as 2047 of softmax: 345.850055 gb/s
n as 2048 of softmax: 470.192376 gb/s
n as 2049 of softmax: 347.012446 gb/s
```
We can see that the perf for inputs with `N = 2040, 2044, 2048` is obviously greater than for the other inputs `(480 vs 340)`.
Therefore, what I would like to do is **mitigate the perf gap between these inputs**.
### Alternatives
The root cause is that there is two kernels for `layer_norm`:
https://github.com/pytorch/pytorch/blob/5e4cf3e6ad6f1f06436f409b394ae02e5ed5583d/aten/src/ATen/native/cuda/layer_norm_kernel.cu#L824-L837
We can see that if `N` is a multiple of `num_vec_elems (4)`, it will call a kernel named `launch_vectorized_layer_norm_kernel`, which loads elements in a vectorized way.
So what we could do is also enable vectorized element loads for those cases whose `N` is not a multiple of `num_vec_elems (4)`.
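A tiny Python illustration of that dispatch condition, matching the benchmark numbers above (assuming `num_vec_elems` is 4, as in the linked kernel code):
```python
num_vec_elems = 4
for N in range(2040, 2050):
    kernel = "vectorized" if N % num_vec_elems == 0 else "non-vectorized fallback"
    print(N, kernel)  # 2040, 2044, 2048 hit the fast vectorized path
```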
I tried a draft implementation to achieve this; it brings performance up to the same level as the vectorized case, which here is about **~40%** `(480 vs 340, taking 2042 as an example)`.
Optimized data:
```
n as 2040 of softmax: 459.758796 gb/s
n as 2041 of softmax: 497.804886 gb/s
n as 2042 of softmax: 479.061575 gb/s
n as 2043 of softmax: 477.197096 gb/s
n as 2044 of softmax: 473.285078 gb/s
n as 2045 of softmax: 473.795206 gb/s
n as 2046 of softmax: 495.999985 gb/s
n as 2047 of softmax: 499.649134 gb/s
n as 2048 of softmax: 455.111104 gb/s
n as 2049 of softmax: 473.928423 gb/s
```
### Additional context
Here is a [doc](https://docs.google.com/document/d/1TFIbJAO3tek1-EltVvMC_0TgYuEeBZw8rxDrSb_Wx8Q/edit?tab=t.0) contains some stuffs about it.
And there is another possible optimization: there is still a perf gap between `layer_norm` in `pytorch` and `triton`. We may mitigate that gap by using **registers to cache the data in the pytorch kernel**, because `launch_vectorized_layer_norm_kernel` currently loads data from gmem twice. However, I guess the `N` for the `layer_norm` op may usually be a very big number (> 10k for some 2d layer_norm), so it may introduce a lot of register pressure. WDYT?
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy
| true
|
2,796,928,799
|
Please add fp16 to MPS devices.
|
AimoneAndex
|
open
|
[
"needs reproduction",
"triaged",
"module: amp (automated mixed precision)",
"module: mps"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
I used torch==2.7 to train llama via huggingface transformers, but got:
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 9.00it/s]
trainable params: 4,194,304 || all params: 6,933,450,752 || trainable%: 0.0605
Traceback (most recent call last):
File "/Users/rbhan/Data/StellAIHub/Train/LanguageModel/xmt.py", line 115, in <module>
trainer = Trainer(
^^^^^^^^
File "/Users/rbhan/Data/AIHub/Trans-Penv/transformers/src/transformers/utils/deprecation.py", line 165, in wrapped_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/AIHub/Trans-Penv/transformers/src/transformers/trainer.py", line 459, in __init__
self.create_accelerator_and_postprocess()
File "/Users/rbhan/Data/AIHub/Trans-Penv/transformers/src/transformers/trainer.py", line 5071, in create_accelerator_and_postprocess
self.accelerator = Accelerator(**args)
^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/AIHub/Trans-Penv/accelerate/src/accelerate/accelerator.py", line 495, in __init__
raise ValueError(f"fp16 mixed precision requires a GPU (not {self.device.type!r}).")
ValueError: fp16 mixed precision requires a GPU (not 'mps').
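For what it's worth, plain fp16 tensors themselves do work on MPS; the failure above is raised by accelerate's mixed-precision device check in the traceback rather than by tensor support. A minimal sketch:
```python
import torch

x = torch.randn(4, 4, device="mps", dtype=torch.float16)
w = torch.randn(4, 4, device="mps", dtype=torch.float16)
print((x @ w).dtype)  # torch.float16
```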
### Alternatives
So when can pytorch support fp16 on MPS? I have waited for one year, but there is no solution. Thank you!
### Additional context
_No response_
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,796,906,847
|
Bracket indexing not working
|
moghadas76
|
open
|
[
"needs reproduction",
"triaged",
"module: advanced indexing"
] | 2
|
NONE
|
### 🐛 Describe the bug
Bracket (list) indexing does not keep the indexed dimensions as expected:
```python
import torch
tn = torch.randn(6980, 1, 12, 16, 20)
tn[[1], :, :, :, :].shape # (1, 1, 12, 16, 20)
tn[[1], :, [11], :, :].shape # (1, 1, 16, 20)
```
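This matches NumPy-style advanced indexing: when two list indices are separated by slices, their broadcast dimension is moved to the front, so the two indexed dimensions collapse into one. A hedged workaround sketch that keeps every singleton dimension by indexing one dimension at a time:
```python
import torch

tn = torch.randn(6980, 1, 12, 16, 20)
out = tn[[1]][:, :, [11], :, :]
print(out.shape)  # torch.Size([1, 1, 1, 16, 20])
```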
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.3.52 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] easy-torch 1.3.2 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.4.52 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-forecasting 1.2.0 pypi_0 pypi
[conda] pytorch-lightning 2.2.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt23cu121 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt23cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt23cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt23cu121 pypi_0 pypi
[conda] torch-summary 1.4.5 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.3.0.post0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
| true
|
2,796,906,161
|
Release PyTorch version 2.6.0 on PyPI
|
farzanehnakhaee70
|
closed
|
[
"module: cuda",
"oncall: releng"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently in nvidia/pytorch:24.12, the version of torch which is used is torch 2.6.0. However, it is not yet published on PyPI. [Here](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-24-12.html#rel-24-12) is their release note.
When will it be possible to publish it on PyPI?
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @msaroufim @eqy
| true
|
2,796,787,112
|
PEP585 update - torch/distributed/tensor
|
aorenste
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (ddp)",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145141
See #145101 for details.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,796,786,880
|
PEP585 update - torch/ao/quantization
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"fx",
"release notes: AO frontend"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145140
See #145101 for details.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,796,786,632
|
PEP585 update - torch/_functorch
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: AO frontend"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145139
See #145101 for details.
| true
|
2,796,786,294
|
PEP585 update - torch/_export
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145138
* #145154
See #145101 for details.
| true
|
2,796,786,027
|
PEP585 update - torch/_inductor/[_-i]*
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145137
See #145101 for details.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,600,843
|
[inductor] [bug fix] Fix `conv` on processing uint
|
shaoyuyoung
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 21
|
CONTRIBUTOR
|
Fixes #144314
ut
```
pytest -s -v test/inductor/test_torchinductor.py -k test_conv_errors_with_uint
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,568,899
|
DISABLED test_integers_t1_uint8_np_longlong (__main__.TestArrayFromScalar)
|
izaitsevfb
|
closed
|
[
"skipped"
] | 2
|
CONTRIBUTOR
|
[Test was renamed, broken previously.](https://github.com/pytorch/pytorch/pull/133546#issuecomment-2599333158)
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22torch_np%2Fnumpy_tests%2Fcore%2Ftest_scalar_ctors.py%3A%3ATestArrayFromScalar%3A%3Atest_integers_t1_uint8_np_longlong%22%5D)).
| true
|
2,796,568,673
|
DISABLED test_dtype_passthrough_dtype_complex128 (__main__.TestDLPack)
|
izaitsevfb
|
closed
|
[
"skipped"
] | 2
|
CONTRIBUTOR
|
[Test was renamed, broken previously.](https://github.com/pytorch/pytorch/pull/133546#issuecomment-2599333158)
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22torch_np%2Fnumpy_tests%2Fcore%2Ftest_dlpack.py%3A%3ATestDLPack%3A%3Atest_dtype_passthrough_dtype_complex128%22%5D)).
| true
|
2,796,567,618
|
[inductor] fix MA on poor gpu
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #140249
* __->__ #145133
Found this bug when debugging an MA issue in CI that cannot be repro-ed on devgpu.
On GPUs with fewer than 68 SMs (like the NVIDIA L4 used in CI), running torch compile in max-autotune mode may result in the following confusing error https://gist.github.com/shunting314/370f42f547e3367a3773237942725a86 complaining about layout:
```
torch._inductor.exc.InductorError: LoweringException: AssertionError: convert FlexibleLayout to FixedLayout first
```
The reason is, even if we don't pick the Triton template, Inductor still returns a MultiTemplateBuffer for the tuned addmm. MultiTemplateBuffer.get_reads, called from Reduction.num_splits, may index into a FlexibleLayout, which results in the aforementioned error.
The issue does not appear on devgpu because we freeze the layout of addmm inputs when rendering triton templates.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,541,647
|
[dynamo] Log guard latency
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145132
* #145509
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,796,529,276
|
[inductor] Fix ignored options for torch.compile
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145131
#139833 broke `torch.compile(options=...)` so that many (all?) options passed in get completely ignored. @alexreinking pointed this out when `options={"cpu_backend":"halide"}` did nothing.
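A minimal repro sketch of the regression, using the `cpu_backend` option mentioned above (before this fix the option is silently ignored):
```python
import torch

def f(x):
    return x.sin() + 1

compiled = torch.compile(f, options={"cpu_backend": "halide"})
compiled(torch.randn(8))  # the option should reach the inductor config
```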
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,796,525,833
|
[cuBLAS][cuBLASLt] Unify `cuBLASLt` workspaces with `cuBLAS` workspaces
|
eqy
|
closed
|
[
"module: cuda",
"triaged",
"module: cublas",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"module: dynamo",
"ciflow/inductor",
"matrix multiplication",
"ciflow/rocm",
"ci-no-td"
] | 87
|
COLLABORATOR
|
As `cuBLAS` workspaces are already per-stream, there shouldn't be kernel execution overlap with `cuBLASLt` kernels.
This PR reuses `cuBLAS` workspaces for `cuBLASLt` for the following benefits:
+ caching (`cuBLAS` workspaces were already cached, so now we get that for `cuBLASLt`)
+ "free" workspace size bump for `cuBLASLt`: `cuBLASLt` workspace sizes were previously smaller than those for `cuBLAS` by default, which potentially hurts performance, and we encountered difficulty in increasing the size due to downstream OOMs; see also #120925
+ fixes broken behavior with the memtracker; https://github.com/pytorch/pytorch/pull/139442 attempted to handle peaky allocation behavior that broke memtracker equivalence tests but it didn't seem to fully work; here the cached/reused `cuBLAS` workspace seems to fix it
+ one environment variable to rule them all: `CUBLAS_WORKSPACE_CONFIG` applies directly to `cuBLASLt` without a confusing `CUBLASLT_WORKSPACE_SIZE` that users would also need to consider
Edit: for now, CUBLASLT_WORKSPACE_SIZE still exists to preserve previous behavior (we noticed some accuracy differences when automatically enabling larger workspace for CUBLASLT)
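A small sketch of the "one environment variable" point above: `CUBLAS_WORKSPACE_CONFIG` needs to be set before the first cuBLAS call (the value shown is just the one suggested in PyTorch's reproducibility notes, not a recommendation specific to this PR):
```python
import os

# Set before any cuBLAS workspace is created.
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")

import torch
a = torch.randn(128, 128, device="cuda")
print((a @ a).shape)
```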
cc @ptrblck @msaroufim @csarofeen @xwang233 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,796,519,027
|
[do not land] check unit tests in test_modules
|
FindHao
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"keep-going"
] | 7
|
MEMBER
|
BE tests
| true
|
2,796,512,793
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 81
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,796,511,696
|
[do not land] check unit tests
|
FindHao
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"keep-going"
] | 7
|
MEMBER
|
BE tests
| true
|