| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (string, 2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, 7–62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,751,733,450
|
Use default_collate from public API
|
kit1980
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Codemodded via `torchfix . --select=TOR104 --fix`.
This is a step to unblock https://github.com/pytorch/pytorch/pull/141076
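As a hedged illustration of the kind of rewrite this codemod performs (not the literal diff from this PR): TOR104 flags imports of the collate helper from its private module and points to the public API instead.
```python
# Hedged before/after sketch (assumed typical private path, not this PR's exact diff).
# Before: importing the collate helper from a private module
#   from torch.utils.data._utils.collate import default_collate
# After: use the public API
from torch.utils.data import default_collate

batch = default_collate([{"x": 1}, {"x": 2}])  # -> {"x": tensor([1, 2])}
```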
| true
|
2,751,731,749
|
[Reland] Add support for bfloat16 atomic adds in fbcode
|
mlazos
|
closed
|
[
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 1
|
CONTRIBUTOR
|
Reland https://github.com/pytorch/pytorch/pull/141857 and fall back on A100, which doesn't have bfloat16 atomic add instructions.
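A toy compiled function that exercises accumulation into a bfloat16 buffer (an illustrative sketch under my own assumptions, not this PR's test):
```python
import torch

@torch.compile
def accumulate(idx, src, n):
    out = torch.zeros(n, device=src.device, dtype=torch.bfloat16)
    # accumulate=True scatters with addition; on GPU this kind of scatter is
    # typically lowered to atomic adds in the generated kernel
    out.index_put_((idx,), src, accumulate=True)
    return out

if torch.cuda.is_available():
    idx = torch.randint(0, 128, (4096,), device="cuda")
    src = torch.randn(4096, device="cuda", dtype=torch.bfloat16)
    accumulate(idx, src, 128)
```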
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,751,673,723
|
Add a test to ensure grads are never inplaced into accidentally
|
janeyx99
|
closed
|
[
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143614
| true
|
2,751,599,946
|
[ROCm] upgrade nightly wheels to rocm6.3 - 2 of 2 (binaries)
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/binaries",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,751,551,041
|
[BE] Add a test to ensure grads are never inplaced into accidentally
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143612
| true
|
2,751,526,962
|
[DynamoBench] Handle accuracy results in benchmark records
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
I discovered this issue when trying to search for the accuracy results in the database and couldn't find any. It turns out that the results are there in the JSON file, for example `"metric": {"name": "accuracy", "benchmark_values": ["pass_due_to_skip"]}`, but inserting them into the database fails because the benchmark values are a list of strings here while the expectation is that they are a list of numbers.
ClickHouse doesn't support mixed types at the moment. It has a Variant type https://clickhouse.com/docs/en/sql-reference/data-types/variant, but the ClickHouse team itself doesn't recommend using it. So, the remaining option is to store this in the `extra_info` field. This field is a dictionary, so the value can go there.
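A minimal sketch of the workaround described above, assuming a metric dict shaped like the JSON example (the function name and record layout are illustrative, not the actual uploader code):
```python
def normalize_metric(metric: dict) -> dict:
    # If benchmark_values holds strings (e.g. "pass_due_to_skip"), move them
    # into extra_info so the numeric benchmark_values column stays numbers-only.
    values = metric.get("benchmark_values", [])
    if any(isinstance(v, str) for v in values):
        metric.setdefault("extra_info", {})["benchmark_values"] = values
        metric["benchmark_values"] = []
    return metric

print(normalize_metric({"name": "accuracy", "benchmark_values": ["pass_due_to_skip"]}))
```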
### Testing
https://github.com/pytorch/pytorch/actions/runs/12421747715
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,751,506,767
|
[inductor] Shorten tracebacks for errors inside inductor (by skipping AOTAutograd frames)
|
jansel
|
closed
|
[
"Merged",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143626
* __->__ #143610
* #143552
Before #143552
```py
Traceback (most recent call last):
File "/home/jansel/pytorch/repro.py", line 51, in <module>
fp32_compiled = optimized_model(low_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 1381, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 1165, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 987, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 2870, in run
super().run()
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 1053, in run
while self.step():
^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 963, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 3050, in RETURN_VALUE
self._return(inst)
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 3035, in _return
self.output.compile_subgraph(
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1880, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 1145, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 676, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1758, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 572, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 686, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1044, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1975, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1981, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1912, in codegen
self.scheduler = Scheduler(self.operations)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 1880, in __init__
self._init(nodes)
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 1955, in _init
self.nodes = self.fuse_nodes(self.nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 2461, in fuse_nodes
nodes = self.fuse_nodes_once(nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 2773, in fuse_nodes_once
assert False, "a fake error during fusion"
^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: a fake error during fusion
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
Before this PR
```py
Traceback (most recent call last):
File "/home/jansel/pytorch/repro.py", line 51, in <module>
fp32_compiled = optimized_model(low_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/eval_frame.py", line 580, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1484, in _call_user_compiler
raise BackendCompilerFailed(
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1463, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1880, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 1145, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 676, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1758, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 572, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 686, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1044, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1975, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1981, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1912, in codegen
self.scheduler = Scheduler(self.operations)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 1880, in __init__
self._init(nodes)
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 1955, in _init
self.nodes = self.fuse_nodes(self.nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 2461, in fuse_nodes
nodes = self.fuse_nodes_once(nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 2773, in fuse_nodes_once
assert False, "a fake error during fusion"
^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: a fake error during fusion
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
After this PR
```py
Traceback (most recent call last):
File "/home/jansel/pytorch/repro.py", line 51, in <module>
fp32_compiled = optimized_model(low_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/eval_frame.py", line 580, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 704, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 689, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1138, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1053, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1975, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1981, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1912, in codegen
self.scheduler = Scheduler(self.operations)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 1880, in __init__
self._init(nodes)
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 1955, in _init
self.nodes = self.fuse_nodes(self.nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 2461, in fuse_nodes
nodes = self.fuse_nodes_once(nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 2773, in fuse_nodes_once
assert False, "a fake error during fusion"
^^^^^
torch._inductor.exc.InductorError: AssertionError: a fake error during fusion
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
A large number of frames are removed between:
```py
File "/home/jansel/pytorch/torch/_dynamo/eval_frame.py", line 580, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 704, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,751,464,821
|
[Dynamo] check node class first for graph dedup
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 6
|
CONTRIBUTOR
|
as title
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,751,462,612
|
Remove assert from partitioner.py
|
pytorchbot
|
closed
|
[
"open source",
"release notes: fx",
"fx"
] | 1
|
COLLABORATOR
|
Remove erroneous assert assuming a dependent (user) node to be in the partition. This partially reverts #136616 by removing the assert.
Tested locally with a failing ExecuTorch Arm test using
```
$ python -m examples.arm.aot_arm_compiler --model_name mv2 --target ethos-u55-128 --delegate --quantize
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,751,423,183
|
[GPT-fast] Support running a specific model or micro-benchmark
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor-micro-benchmark"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143607
| true
|
2,751,380,899
|
remove allow-untyped-defs from torch/distributed/pipelining/_debug.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143606
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,751,380,781
|
remove allow-untyped-defs from torch/distributed/elastic/multiprocessing/errors/handlers.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (torchelastic)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143606
* __->__ #143605
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,751,380,680
|
remove allow-untyped-defs from torch/ao/__init__.py
|
bobrenjc93
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"release notes: AO frontend",
"ci-no-td"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143606
* #143605
* __->__ #143604
| true
|
2,751,380,542
|
remove allow-untyped-defs from torch/_lazy/config.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143606
* #143605
* #143604
* __->__ #143603
| true
|
2,751,380,424
|
remove allow-untyped-defs from torch/fx/experimental/refinement_types.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143606
* #143605
* #143604
* #143603
* __->__ #143602
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,751,380,313
|
remove allow-untyped-defs from torch/ao/quantization/experimental/APoT_tensor.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"release notes: AO frontend"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143606
* #143605
* #143604
* #143603
* #143602
* __->__ #143601
| true
|
2,751,377,294
|
Reuse partial reductions
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143600
Reuse partial reductions for complete reductions. We could expand this to cover more types of reductions, although we'd have to be a bit more careful about keeping the intermediary partial reduction in higher precision.
For now, just doing the ops which do not depend on a higher compute_dtype_precision, to cover the relevant use case initially.
Fix for https://github.com/pytorch/pytorch/issues/136267. Longer term, we should make sure cooperative reductions fuse partial and complete reductions.
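A toy compiled function showing the pattern this targets (an illustrative sketch, not the PR's test): the full `amax` can be computed from the per-row partial `amax` instead of re-reading the whole tensor, and `amax` does not need a higher-precision accumulator.
```python
import torch

@torch.compile
def f(x):
    row_max = x.amax(dim=-1)  # partial reduction over the last dim
    global_max = x.amax()     # complete reduction; reusable as row_max.amax()
    return row_max, global_max

f(torch.randn(1024, 1024))
```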
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,751,345,839
|
[pipelining] throw error with ZB and compile
|
H-Huang
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (pipeline)"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143386
* __->__ #143599
Zero bubble will SIGSEGV when operating on a `torch.compile`'d model, so raising this error while I am still investigating the cause / design for a fix.
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,751,340,385
|
c10d: no call_guard in init
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
MEMBER
|
`py::call_guard<py::gil_scoped_release>` is not safe when using multiple threads. This instead moves it into the init function which is safe.
For more details see #143593
https://github.com/pybind/pybind11/issues/5473
Test plan:
```
python setup.py develop
```
CI
```py
import time
from concurrent.futures import ThreadPoolExecutor

from torch import distributed as dist


def run():
    store = dist.TCPStore(
        host_name="localhost",
        port=0,
        is_master=True,
        wait_for_workers=False,
    )
    # this sleep is required to trigger the crash
    time.sleep(0.1)
    del store


futures = []
with ThreadPoolExecutor(
    max_workers=100,
) as executor:
    for i in range(100000):
        print(i)
        futures.append(executor.submit(run))
        if len(futures) > 100:
            futures.pop(0).result()
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o
| true
|
2,751,322,914
|
remove explicit aws tokens used in sccache, works via IMDSv2 now
|
wdvr
|
closed
|
[
"Stale",
"topic: not user facing",
"ciflow/xpu"
] | 2
|
CONTRIBUTOR
|
Fixes #143585
| true
|
2,751,316,156
|
[canary] clear speculation log
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143596
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D67471663](https://our.internmc.facebook.com/intern/diff/D67471663)
| true
|
2,751,283,008
|
Improve cond error messaging
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Discovered by @drisspg and me while trying out a simple toy example and being way too confused :')
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143595
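For context, a minimal `torch.cond` call of the kind such a toy experiment might use (an assumption on my part; the PR doesn't show the exact snippet that produced the confusing error):
```python
import torch

def true_fn(x):
    return x.sin()

def false_fn(x):
    return x.cos()

@torch.compile
def f(x):
    # data-dependent predicate: which branch runs is decided at runtime
    return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))

f(torch.randn(4))
```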
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,751,221,147
|
[dynamo, 3.13t] raise error if torch.compile is attempted in 3.13t (nogil)
|
williamwen42
|
closed
|
[
"module: dynamo",
"ciflow/inductor",
"module: python version"
] | 1
|
MEMBER
|
https://github.com/pytorch/pytorch/pull/143404 cherry picked for 2.6
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,751,217,284
|
TCPStore crash when initializing from multiple threads
|
d4l3k
|
open
|
[
"oncall: distributed",
"module: c10d",
"bug"
] | 2
|
MEMBER
|
### 🐛 Describe the bug
There's a bug in pybind which is causing TCPStore to crash on deletion when instantiating it from multiple threads.
```
terminate called after throwing an instance of 'std::runtime_error'
what(): pybind11_object_dealloc(): Tried to deallocate unregistered instance!
```
Full repro and stack traces are at: https://gist.github.com/d4l3k/24fd4ac1994ceb4b5a063b125ace1fe3
### Versions
PyTorch 2.5.1, main
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o
| true
|
2,751,205,847
|
[dynamo] guard global autocast state
|
williamwen42
|
closed
|
[
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143592
Fixes https://github.com/pytorch/pytorch/issues/112260.
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,751,203,106
|
c10::string_view -> std::string_view in pytorch
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Test Plan: Sandcastle
Differential Revision: D67312322
| true
|
2,751,201,845
|
[ROCm] Update setup-rocm for almalinux-based images
|
amdfaa
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
CONTRIBUTOR
|
Needed for https://github.com/pytorch/test-infra/pull/6104 and https://github.com/pytorch/ao/pull/999
* Explicitly specify repo and branch in `pytorch/pytorch/.github/actions/diskspace-cleanup@main` to be able to use `setup-rocm` in test-infra's `.github/workflows/linux_job_v2.yml` (like in PR https://github.com/pytorch/test-infra/pull/6104), otherwise Github Actions complains about not finding `diskspace-cleanup` action in `test-infra` repo.
* Use `RUNNER_TEMP` instead of `/tmp`
* Add `bin` group permissions for Almalinux images due to difference in default OS group numbering in Ubuntu vs Almalinux
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,751,189,079
|
[pipelining] Detach output and losses returned to the user
|
Adrien-AM
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Stale",
"release notes: distributed (pipeline)"
] | 4
|
CONTRIBUTOR
|
Issue #142229, followup to PR #142237
The output and losses returned to the user by the last stage are now detached from the rest of the graph. Therefore the tensors do not need to be detached in-place during backward, which was causing issues with views.
I also added tests for memory usage of schedules, with additional tests for models with views as outputs and losses using views.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,751,127,223
|
Full BFGS optimizer
|
PeaBrane
|
open
|
[
"module: optimizer",
"triaged"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently, the torch optimizer supports [LBFGS](https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html), which is a limited-memory version of the full [BFGS](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm#:~:text=In%20numerical%20optimization%2C%20the%20Broyden,solving%20unconstrained%20nonlinear%20optimization%20problems.) optimizer. Admittedly, the BFGS optimizer requires O(n^2) memory for storing the running Hessian, so it is totally impractical for training moderately sized neural networks. However, I'm currently working on some social science projects that require running regression models where the dataset is large but the number of regression coefficients (trainable weights) is small, usually under 100. In this case, I think the BFGS optimizer is the perfect fit, because:
- The optimization space is small dimensional
- The problem is usually convex
- Stability of convergence and precision of the (local) minimum is very important
Actually, I think in this case the full BFGS optimizer can even be more efficient compared to LBFGS.
JAX currently has an implementation of [BFGS](https://github.com/jax-ml/jax/blob/main/jax/_src/scipy/optimize/bfgs.py), which I'm currently using, but the ecosystem of JAX is very awkward and I'd personally prefer using torch for everything.
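For reference, the closest fit available in torch today is `torch.optim.LBFGS`; here is a small self-contained sketch of the low-dimensional regression use case described above (synthetic data and illustrative settings, not a benchmark):
```python
import torch

# Synthetic regression: large dataset, few coefficients.
X = torch.randn(10_000, 20)
w_true = torch.randn(20, 1)
y = X @ w_true + 0.01 * torch.randn(10_000, 1)

w = torch.zeros(20, 1, requires_grad=True)
opt = torch.optim.LBFGS([w], line_search_fn="strong_wolfe")

def closure():
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(X @ w, y)
    loss.backward()
    return loss

for _ in range(20):
    opt.step(closure)
```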
### Alternatives
_No response_
### Additional context
This is potentially not a good fit for torch because I believe it's really meant for training neural networks, and not solving convex regression problems. However, if BFGS can reuse components from the existing LBFGS implementation, maybe it's worth pursuing.
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,751,051,300
|
fix formatting in programming model doc
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Test Plan: Some of the formatting in https://docs-preview.pytorch.org/pytorch/pytorch/143546/export.programming_model.html is broken.
Differential Revision: D67458972
| true
|
2,750,957,476
|
update expected results
|
laithsakka
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143586
Update results based on a small regression added by
https://github.com/pytorch/pytorch/commit/17b71e5d6a8a45c33e01231e38056e7da5857c88
The max we saw was 1.25%, for sum_floor_div.
<img width="842" alt="Screenshot 2024-12-19 at 9 04 30 AM" src="https://github.com/user-attachments/assets/6ce913cd-110d-4837-af59-08fb6a0dd12d" />
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,750,808,558
|
[CI] XPU linux ci test has flaky error with sccache service
|
chuanqi129
|
closed
|
[
"module: ci",
"triaged",
"module: infra",
"intel"
] | 3
|
COLLABORATOR
|
Noticed that there are some torch xpu CI test failures with sccache caused by expired S3 token permissions; see https://github.com/pytorch/pytorch/actions/runs/12374649660/job/34540161843#step:14:3461
```
sccache: error: Server startup failed: cache storage failed to read: Unexpected (permanent) at read => S3Error { code: "ExpiredToken", message: "The provided token has expired.", resource: "", request_id: "NMJ4H2V91GQ7S2BZ" }
```
The workflow is https://github.com/pytorch/pytorch/blob/main/.github/workflows/_xpu-test.yml, which runs on the self-hosted runner `linux.idc.xpu` with Docker containers.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,750,759,085
|
remove allow-untyped-defs from torch/utils/_config_typing.pyi
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143584
* #143583
* #143582
| true
|
2,750,758,952
|
remove allow-untyped-defs from distributed/tensor/experimental/__init__.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143583
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,750,758,806
|
remove allow-untyped-defs from torch/ao/quantization/experimental/fake_quantize_function.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"release notes: AO frontend"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143584
* #143583
* __->__ #143582
| true
|
2,750,662,910
|
Unskipped multiple inductor tests for ROCm
|
iupaikov-amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/rocm",
"ci-no-td",
"ciflow/inductor-rocm"
] | 20
|
CONTRIBUTOR
|
All of them should be fine to run now after the triton fix.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,750,637,701
|
RuntimeError: expect_autograd_hooks_ INTERNAL ASSERT FAILED at "../torch/csrc/distributed/c10d/reducer.cpp"
|
alita-moore
|
open
|
[
"oncall: distributed",
"module: ddp"
] | 5
|
NONE
|
### 🐛 Describe the bug
When using `DistributedDataParallel` (DDP) with `static_graph=True` and multiple backward passes on the same forward pass within a `no_sync()` context, a runtime error may occur. Specifically, if the very first forward/backward call sequence on the model is made within a `no_sync()` block and involves calling `backward(retain_graph=True)` (also occurs when calling with `retain_graph=False`) on one loss followed by a second backward call on another loss derived from the same forward pass, an internal PyTorch assertion error can be triggered. This issue does not occur if a normal forward/backward pass is performed first (outside of `no_sync()`), and it also does not happen if `no_sync()` is never used.
Run the scripts below with `torchrun script_name.py`.
## Reproduces the error
```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.multiprocessing as mp


def func(model):
    im = torch.empty(1, 3, 224, 224, device="cuda")
    seq = torch.randint(0, 1000, (1, 128), device="cuda").long()
    loss, speculative_loss = model(im, seq)
    loss.backward(retain_graph=True)
    speculative_loss.backward()


def worker(rank, world_size):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    class Model(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.lin = torch.nn.Linear(3*224*224, 10)

        def forward(self, im, seq):
            out = self.lin(im.flatten(1))
            loss = out.mean()
            return loss, loss

    model = Model().to(rank)
    model = DDP(model, device_ids=[rank], static_graph=True)

    # This scenario triggers the error
    with model.no_sync():
        func(model)


if __name__ == "__main__":
    worker(0, 1)
```
expected output
```bash
[rank0]: Traceback (most recent call last):
[rank0]: File "/workspaces/data-engine/temp/script_mixed_sync.py", line 39, in <module>
[rank0]: worker(0, 1)
[rank0]: File "/workspaces/data-engine/temp/script_mixed_sync.py", line 32, in worker
[rank0]: func(model)
[rank0]: File "/workspaces/data-engine/temp/script_mixed_sync.py", line 10, in func
[rank0]: loss.backward(retain_graph=True)
[rank0]: File "/workspaces/data-engine/jobs/extractor-training/.venv/lib/python3.10/site-packages/torch/_tensor.py", line 581, in backward
[rank0]: torch.autograd.backward(
[rank0]: File "/workspaces/data-engine/jobs/extractor-training/.venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
[rank0]: _engine_run_backward(
[rank0]: File "/workspaces/data-engine/jobs/extractor-training/.venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: RuntimeError: expect_autograd_hooks_ INTERNAL ASSERT FAILED at "../torch/csrc/distributed/c10d/reducer.cpp":1603, please report a bug to PyTorch.
```
## Demonstration of it working without `no_sync`
```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.multiprocessing as mp


def func(model):
    im = torch.empty(1, 3, 224, 224, device="cuda")
    seq = torch.randint(0, 1000, (1, 128), device="cuda").long()
    loss, speculative_loss = model(im, seq)
    loss.backward(retain_graph=True)
    speculative_loss.backward()


def worker(rank, world_size):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    class Model(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.lin = torch.nn.Linear(3*224*224, 10)

        def forward(self, im, seq):
            out = self.lin(im.flatten(1))
            loss = out.mean()
            return loss, loss

    model = Model().to(rank)
    model = DDP(model, device_ids=[rank], static_graph=True)

    # No no_sync() context - works without error
    func(model)


if __name__ == "__main__":
    worker(0, 1)
```
## Demonstration of it working when running without `no_sync` first
```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.multiprocessing as mp


def func(model):
    im = torch.empty(1, 3, 224, 224, device="cuda")
    seq = torch.randint(0, 1000, (1, 128), device="cuda").long()
    loss, speculative_loss = model(im, seq)
    loss.backward(retain_graph=True)
    speculative_loss.backward()


def worker(rank, world_size):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    class Model(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.lin = torch.nn.Linear(3*224*224, 10)

        def forward(self, im, seq):
            out = self.lin(im.flatten(1))
            loss = out.mean()
            return loss, loss

    model = Model().to(rank)
    model = DDP(model, device_ids=[rank], static_graph=True)

    func(model)
    with model.no_sync():
        func(model)


if __name__ == "__main__":
    worker(0, 1)
```
[scripts.zip](https://github.com/user-attachments/files/18199631/scripts.zip)
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.36
Python version: 3.10.13 (main, Mar 12 2024, 12:22:40) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L40S
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 1 MiB (2 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,750,608,502
|
`assert_size_stride` failing in `_inductor/utils.py` `return model(new_inputs)`
|
bhack
|
open
|
[
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 13
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I was trying to create a custom op for Mamba `selective_scan` as suggested by @ezyang at https://github.com/pytorch/pytorch/issues/130150#issuecomment-2211312921
So I've prepared https://github.com/state-spaces/mamba/pull/651 to extend the original test to the `custom_op` version. The custom op tests pass, just like the original implementation's tests, but the `torch.compile` version of the `custom_op` generates these errors.
To reproduce on the PR, just run the compiled test:
`pytest -k compile tests/ops/test_selective_scan.py`
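For readers unfamiliar with the setup, here is a hedged, minimal sketch of the general pattern in play (a made-up `mylib::double` op, not the Mamba kernel): the fake/meta registration has to report the same sizes and strides the real kernel produces, which is what the `assert_size_stride` checks in the logs below are verifying.
```python
import torch

@torch.library.custom_op("mylib::double", mutates_args=())
def double(x: torch.Tensor) -> torch.Tensor:
    return 2 * x  # real kernel: contiguous output, same shape as the input

@double.register_fake
def _(x):
    # Must match the real output's sizes and strides, or inductor's generated
    # assert_size_stride can fail at runtime like in the error logs below.
    return torch.empty_like(x)

@torch.compile
def f(x):
    return torch.ops.mylib.double(x).sum()

f(torch.randn(2, 4, 8))
```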
### Error logs
```python
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-128-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==8192 at dim=0; expected size 4==4, stride 16==2048 at dim=1; expected size 1==128, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-256-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==16384 at dim=0; expected size 4==4, stride 16==4096 at dim=1; expected size 1==256, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-512-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==32768 at dim=0; expected size 4==4, stride 16==8192 at dim=1; expected size 1==512, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-1024-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==65536 at dim=0; expected size 4==4, stride 16==16384 at dim=1; expected size 1==1024, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-2048-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==131072 at dim=0; expected size 4==4, stride 16==32768 at dim=1; expected size 1==2048, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-1-True-True-True-True-True-4096-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 128==262144 at dim=0; expected size 4==4, stride 32==65536 at dim=1; expected size 2==4096, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-128-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==8192 at dim=0; expected size 4==4, stride 16==2048 at dim=1; expected size 1==128, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-256-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==16384 at dim=0; expected size 4==4, stride 16==4096 at dim=1; expected size 1==256, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-512-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==32768 at dim=0; expected size 4==4, stride 16==8192 at dim=1; expected size 1==512, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-1024-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==65536 at dim=0; expected size 4==4, stride 16==16384 at dim=1; expected size 1==1024, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-2048-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 64==131072 at dim=0; expected size 4==4, stride 16==32768 at dim=1; expected size 1==2048, stride 16==16 at dim=2
FAILED tests/ops/test_selective_scan.py::test_selective_scan[True-True-2-True-True-True-True-True-4096-itype0-wtype0-compiled] - AssertionError: expected size 2==2, stride 128==262144 at dim=0; expected size 4==4, stride 32==65536 at dim=1; expected size 2==4096, stride 16==16 at dim=2
```
The origin is quite similar for all the failing tests. Pasting just one:
```python
op_impl = <function selective_scan_fn_custom_op at 0x7f64c61bdee0>, is_variable_B = True, is_variable_C = True, varBC_groups = 2, has_D = True, has_z = True, has_delta_bias = True, delta_softplus = True, return_last_state = True, seqlen = 2048, itype = torch.float32, wtype = torch.float32
@pytest.mark.parametrize(
"op_impl",
[
selective_scan_fn,
selective_scan_fn_custom_op,
torch.compile(selective_scan_fn_custom_op),
],
ids=["original", "custom", "compiled"],
)
# @pytest.mark.parametrize('wtype', [torch.float32, torch.complex64])
@pytest.mark.parametrize("wtype", [torch.float32])
# @pytest.mark.parametrize('itype', [torch.float32, torch.float16, torch.bfloat16])
@pytest.mark.parametrize("itype", [torch.float32])
# @pytest.mark.parametrize('seqlen', [8, 16, 32, 64, 128, 256, 372, 512, 784, 1024, 1134, 2048, 4096])
@pytest.mark.parametrize("seqlen", [128, 256, 512, 1024, 2048, 4096])
# @pytest.mark.parametrize("return_last_state", [False, True])
@pytest.mark.parametrize("return_last_state", [True])
# @pytest.mark.parametrize('has_delta_bias', [False, True])
@pytest.mark.parametrize("has_delta_bias", [True])
# @pytest.mark.parametrize('delta_softplus', [False, True])
@pytest.mark.parametrize("delta_softplus", [True])
# @pytest.mark.parametrize('has_z', [False, True])
@pytest.mark.parametrize("has_z", [True])
# @pytest.mark.parametrize('has_D', [False, True])
@pytest.mark.parametrize("has_D", [True])
@pytest.mark.parametrize("varBC_groups", [1, 2])
# @pytest.mark.parametrize("varBC_groups", [1])
# @pytest.mark.parametrize("is_variable_C", [False, True])
@pytest.mark.parametrize("is_variable_C", [True])
# @pytest.mark.parametrize("is_variable_B", [False, True])
@pytest.mark.parametrize("is_variable_B", [True])
def test_selective_scan(
op_impl,
is_variable_B,
is_variable_C,
varBC_groups,
has_D,
has_z,
has_delta_bias,
delta_softplus,
return_last_state,
seqlen,
itype,
wtype,
):
if varBC_groups > 1 and (not is_variable_B or not is_variable_C):
pytest.skip() # This config is not applicable
device = "cuda"
rtol, atol = (6e-4, 2e-3) if itype == torch.float32 else (3e-3, 5e-3)
if itype == torch.bfloat16:
rtol, atol = 3e-2, 5e-2
rtolw, atolw = (1e-3, 1e-3)
if has_z: # If we have z, the errors on the weights seem higher
rtolw = max(rtolw, rtol)
atolw = max(atolw, atol)
# set seed
torch.random.manual_seed(0)
batch_size = 2
dim = 4
dstate = 8
is_complex = wtype == torch.complex64
A = (-0.5 * torch.rand(dim, dstate, device=device, dtype=wtype)).requires_grad_()
if not is_variable_B:
B_shape = (dim, dstate)
elif varBC_groups == 1:
B_shape = (batch_size, dstate, seqlen if not is_complex else seqlen * 2)
else:
B_shape = (
batch_size,
varBC_groups,
dstate,
seqlen if not is_complex else seqlen * 2,
)
B = torch.randn(
*B_shape,
device=device,
dtype=wtype if not is_variable_B else itype,
requires_grad=True,
)
if not is_variable_C:
C_shape = (dim, dstate)
elif varBC_groups == 1:
C_shape = (batch_size, dstate, seqlen if not is_complex else seqlen * 2)
else:
C_shape = (
batch_size,
varBC_groups,
dstate,
seqlen if not is_complex else seqlen * 2,
)
C = torch.randn(
*C_shape,
device=device,
dtype=wtype if not is_variable_C else itype,
requires_grad=True,
)
if has_D:
D = torch.randn(dim, device=device, dtype=torch.float32, requires_grad=True)
else:
D = None
if has_z:
z = torch.randn(
batch_size, dim, seqlen, device=device, dtype=itype, requires_grad=True
)
else:
z = None
if has_delta_bias:
delta_bias = (
0.5 * torch.rand(dim, device=device, dtype=torch.float32)
).requires_grad_()
else:
delta_bias = None
u = torch.randn(
batch_size, dim, seqlen, device=device, dtype=itype, requires_grad=True
)
delta = (
0.5 * torch.rand(batch_size, dim, seqlen, device=device, dtype=itype)
).requires_grad_()
A_ref = A.detach().clone().requires_grad_()
B_ref = B.detach().clone().requires_grad_()
C_ref = C.detach().clone().requires_grad_()
D_ref = D.detach().clone().requires_grad_() if D is not None else None
z_ref = z.detach().clone().requires_grad_() if z is not None else None
u_ref = u.detach().clone().requires_grad_()
delta_ref = delta.detach().clone().requires_grad_()
delta_bias_ref = (
delta_bias.detach().clone().requires_grad_() if delta_bias is not None else None
)
out, *rest = op_impl(
u,
delta,
A,
B,
C,
D,
z=z,
delta_bias=delta_bias,
delta_softplus=delta_softplus,
return_last_state=return_last_state,
)
if return_last_state:
state = rest[0]
out_ref, *rest = selective_scan_ref(
u_ref,
delta_ref,
A_ref,
B_ref,
C_ref,
D_ref,
z=z_ref,
delta_bias=delta_bias_ref,
delta_softplus=delta_softplus,
return_last_state=return_last_state,
)
if return_last_state:
state_ref = rest[0]
# dA = torch.exp(torch.einsum('bdl,dn->bdln', delta, A))
# dt_u = delta * u
print(f"Output max diff: {(out - out_ref).abs().max().item()}")
print(f"Output mean diff: {(out - out_ref).abs().mean().item()}")
assert torch.allclose(out, out_ref, rtol=rtol, atol=atol)
if return_last_state:
print(f"State max diff: {(state - state_ref).abs().max().item()}")
assert torch.allclose(state, state_ref, rtol=rtol, atol=atol)
g = torch.randn_like(out)
out_ref.backward(g)
> out.backward(g)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/conda/lib/python3.11/site-packages/torch/_tensor.py:648: in backward
torch.autograd.backward(
/opt/conda/lib/python3.11/site-packages/torch/autograd/__init__.py:347: in backward
_engine_run_backward(
/opt/conda/lib/python3.11/site-packages/torch/autograd/graph.py:823: in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
/opt/conda/lib/python3.11/site-packages/torch/autograd/function.py:307: in apply
return user_fn(self, *args)
/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py:1958: in backward
return impl_fn()
/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py:1944: in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py:2079: in _backward_impl
out = call_func_at_runtime_with_args(
/opt/conda/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py:126: in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
/opt/conda/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:751: in _fn
return fn(*args, **kwargs)
/opt/conda/lib/python3.11/site-packages/torch/_inductor/output_code.py:465: in __call__
return self.current_callable(inputs)
/opt/conda/lib/python3.11/site-packages/torch/_inductor/utils.py:2191: in run
return model(new_inputs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = []
def call(args):
primals_2, primals_5, primals_6, primals_7, primals_8, primals_10, primals_11, primals_12, primals_13, primals_16, primals_18, primals_1, primals_3, primals_4, primals_9, primals_14, primals_15, primals_17, primals_19, getitem_2, getitem_3, tangents_1, tangents_2 = args
args.clear()
s0 = primals_2
s1 = primals_5
s2 = primals_6
s3 = primals_7
s4 = primals_8
s5 = primals_10
s6 = primals_11
s7 = primals_12
s8 = primals_13
s9 = primals_16
s10 = primals_18
assert_size_stride(primals_1, (4, ), (1, ))
assert_size_stride(primals_3, (2, 4, s0), (4*s0, s0, 1))
assert_size_stride(primals_4, (4, ), (1, ))
assert_size_stride(primals_9, (s1, s2, s3, s4), (s2*s3*s4, s3*s4, s4, 1))
assert_size_stride(primals_14, (s5, s6, s7, s8), (s6*s7*s8, s7*s8, s8, 1))
assert_size_stride(primals_15, (4, 8), (8, 1))
assert_size_stride(primals_17, (2, 4, s9), (4*s9, s9, 1))
assert_size_stride(primals_19, (2, 4, s10), (4*s10, s10, 1))
assert_size_stride(getitem_2, (2, 4, s10), (4*s10, s10, 1))
> assert_size_stride(getitem_3, (2, 4, s10, 16), (64*s10, 16*s10, 16, 1))
E AssertionError: expected size 2==2, stride 64==131072 at dim=0; expected size 4==4, stride 16==32768 at dim=1; expected size 1==2048, stride 16==16 at dim=2
/tmp/torchinductor_root/gz/cgzkum44b45xqpnatebwkxq45ixpx4p4cpxq7ucx3tkpkwahog3p.py:56: AssertionError
```
### Versions
stable and nightly
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @bdhirsh
| true
|
2,750,537,283
|
Unskipped multiple inductor tests for ROCm
|
iupaikov-amd
|
closed
|
[
"module: rocm",
"topic: not user facing",
"module: inductor"
] | 6
|
CONTRIBUTOR
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,750,445,769
|
[ONNX] Output order is switched when exporting model phi-2 with or without input cache
|
xadupre
|
closed
|
[
"module: onnx",
"triaged"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
The cache is not flattened in the same order when the cache is given as one of the inputs at export time. The reordering seems to be introduced by one of the passes run by the exporter.
With no input cache, the model (2 layers only) outputs ``key_cache_0, value_cache_0, key_cache_1, value_cache_1``; after export this becomes ``return (view_34, _to_copy_7, transpose_3, _to_copy_10, transpose_8)``
last line of the fx graph with no decomposition: ``return (linear_12, to_7, transpose_3, to_10, transpose_8)``

With the cache given as an input, the model outputs: ``key_cache_0, key_cache_1, value_cache_0, value_cache_1``
last line of the fx graph with no decomposition: ``return (linear_12, to_7, cat_6, to_10, cat_12)``, but after export it becomes ``return (view_34, _to_copy_7, _to_copy_10, cat_6, cat_12)``
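A quick way to compare the flattened output order of the two exports is to inspect the saved graphs directly. The sketch below is only an illustration; the file names are placeholders, not the actual repro script:
```python
# Compare the output names of the two exported models (file names are hypothetical).
import onnx

for path in ("phi2_no_cache.onnx", "phi2_with_cache.onnx"):
    model = onnx.load(path)
    print(path, [o.name for o in model.graph.output])
```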

### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241218+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241218+cu126
[pip3] torchaudio==2.6.0.dev20241218+cu126
[pip3] torchvision==0.22.0.dev20241218+cu126
[conda] Could not collect
```
| true
|
2,750,416,518
|
Remove unneeded std::make_optional
|
cyyever
|
closed
|
[
"oncall: jit",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: jit",
"module: dynamo",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,750,198,826
|
Torch elastic restart fails with torch-2.6.0 nightly build: NCCL unhandled system error
|
shujue
|
open
|
[
"oncall: distributed",
"module: nccl"
] | 2
|
NONE
|
### 🐛 Describe the bug
**Summary:**
Run multi-GPU training with torch elastic run, backend nccl.
With Torch 2.5.1, the job restarts successfully.
With nightly torch 2.6.0 (same NCCL version as above), after the job restarts an NCCL error is reported:
`[rank1]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:77, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5`
**Reproduce:**
Here is a minimal bug repro.
```
import os
import random
import sys
import time
import torch
def main():
local_rank = int(os.environ['LOCAL_RANK'])
device = torch.device('cuda')
torch.cuda.set_device(local_rank)
torch.distributed.init_process_group(backend='nccl', init_method='env://')
rank = torch.distributed.get_rank()
if rank == 0:
print("#### NEW RUN ###")
device_ids = [local_rank]
torch.distributed.barrier(device_ids=device_ids)
torch.distributed.destroy_process_group()
sys.exit(123) # force torchrun restart
if __name__ == "__main__":
main()
```
1. Inside a container, for example: pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
2. Install cuSPARSELt when necessary (not related to this issue)
3. Install torch 2.6.0 without changing other libraries: `pip install --no-deps torch-2.6.0.dev20241218+cu124-cp311-cp311-linux_x86_64.whl`
4. Run: `torchrun --nproc-per-node=2 --max-restarts=1 ./repro.py`
**Output:**
Expected result:
Both runs exit with code 123.
What actually happens (sometimes 2-3 attempts are needed to reproduce):
```
[rank1]: Traceback (most recent call last):
[rank1]: File "/workspace/./repro.py", line 57, in <module>
[rank1]: main()
[rank1]: File "/workspace/./repro.py", line 50, in main
[rank1]: torch.distributed.barrier(device_ids=device_ids)
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
[rank1]: work = group.barrier(opts=opts)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/NCCLUtils.cpp:77, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.21.5
[rank1]: ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
[rank1]: Last error:
[rank1]: socketStartConnect: Connect to 172.17.0.2<35553> failed : Software caused connection abort
```
### Versions
The environment is pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
Without modifying any packages, the original script works as expected (restarted once, exit code 123 twice).
Below is the failing environment; the only changes are:
1. cuSPARSELt 0.6.3 installed to enable torch import
2. pip install --no-deps torch-2.6.0.dev20241218+cu124-cp311-cp311-linux_x86_64.whl
(from https://download.pytorch.org/whl/nightly/torch/)
```
PyTorch version: 2.6.0.dev20241218+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.92
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.6.0.dev20241218+cu124
[pip3] torchaudio==2.5.1+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0.dev20241218+cu124 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,750,099,025
|
register_backward_pre_hook wrong
|
pxp511
|
closed
|
[
"module: autograd",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
I tested hooks with UNet2DConditionModel like this:
```
def forward_pre_hook(module, input, output=None):
print_rank_0("forward_pre_hook")
def forward_hook(module, input, output=None):
print_rank_0("forward_hook")
def backward_pre_hook(module, input, output=None):
print_rank_0("backward_pre_hook")
def backward_hook(module, input, output=None):
print_rank_0("backward_hook")
for name, module in unet.named_modules():
if name == "module.up_blocks.3.attentions.2.transformer_blocks.0.ff.net.2":
module.register_forward_pre_hook(forward_pre_hook)
module.register_forward_hook(forward_hook)
module.register_full_backward_pre_hook(backward_pre_hook)
module.register_full_backward_hook(backward_hook)
```
but got an unexpected result (a redundant forward_pre_hook):

2024-12-19 04:49:19 (68390) [INFO] forward_pre_hook
2024-12-19 04:49:19 (68390) [INFO] forward_hook
2024-12-19 04:49:19 (68390) [INFO] backward_pre_hook
2024-12-19 04:49:19 (68390) [INFO] forward_pre_hook
2024-12-19 04:49:19 (68390) [INFO] backward_hook
Only one module, "module.up_blocks.3.attentions.2.transformer_blocks.0.ff.net.2", shows this behavior.
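For comparison, a minimal sketch with a single nn.Linear (instead of UNet2DConditionModel) should print each hook exactly once, in the order forward_pre, forward, full_backward_pre, full_backward; the extra forward_pre entry in the log above deviates from that:
```python
# Minimal sketch: the four hook kinds on a tiny module, each expected to fire once
# per forward/backward pass, in the order forward_pre, forward, full_backward_pre, full_backward.
import torch
import torch.nn as nn

mod = nn.Linear(4, 4)
for kind in ("forward_pre", "forward", "full_backward_pre", "full_backward"):
    getattr(mod, f"register_{kind}_hook")(lambda *args, kind=kind: print(kind))

mod(torch.randn(2, 4, requires_grad=True)).sum().backward()
```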
### Versions
Name: torch
Version: 2.1.0
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: packages@pytorch.org
License: BSD-3
Name: diffusers
Version: 0.31.0
Summary: State-of-the-art diffusion in PyTorch and JAX.
Home-page: https://github.com/huggingface/diffusers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/diffusers/graphs/contributors)
Author-email: diffusers@huggingface.co
License: Apache 2.0 License
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,750,072,340
|
cuda graphs produce two additional kernel calls
|
trporn
|
closed
|
[
"triaged",
"module: cuda graphs"
] | 4
|
NONE
|
### 🐛 Describe the bug
When using cuda graph capture, the replay() function produces two additional kernel calls before the launchGraph.
The additional calls are to fillFunctor, probably the result of replay_prologue() (line 229 in https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/cuda/CUDAGraph.cpp).
This is unexpected behavior that makes graphs a non-viable option for smaller code sections.
Run `nsys profile -t cuda python file.py` on the following code to see the problem.
```
import torch
N, D_in, H, D_out = 640, 4096, 2048, 1024
model = torch.nn.Linear(D_in, H).cuda()
# Placeholders used for capture
static_input = torch.randn(N, D_in, device='cuda')
static_target = torch.randn(N, D_out, device='cuda')
# warmup
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
for i in range(3):
y_pred = model(static_input)
torch.cuda.current_stream().wait_stream(s)
# capture
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
static_y_pred = model(static_input)
real_inputs = [torch.rand_like(static_input) for _ in range(100)]
real_targets = [torch.rand_like(static_target) for _ in range(100)]
for data, target in zip(real_inputs, real_targets):
static_input.copy_(data)
static_target.copy_(target)
g.replay()
```
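The same launches can also be inspected without nsys. The snippet below is a minimal sketch appended to the code above (it reuses `g`, `static_input`, `real_inputs` from that script) and assumes the prologue fill kernels show up as ordinary CUDA kernel events in torch.profiler, as they do under nsys:
```python
# Profile a single replay() to count the kernels launched around the graph launch.
import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CUDA]) as prof:
    static_input.copy_(real_inputs[0])
    static_target.copy_(real_targets[0])
    g.replay()
    torch.cuda.synchronize()
print(prof.key_averages().table(sort_by="cuda_time_total"))
```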
### Versions
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.0 (main, Mar 1 2023, 18:26:19) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 3500 Ada Generation Laptop GPU
Nvidia driver version: 553.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] nvtx==0.2.10
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.1
[pip3] torchmetrics==1.6.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @mcarilli @ezyang @eellison @penguinwu
| true
|
2,750,056,673
|
Issues linking to libtorch on M2 mac
|
jatkinson1000
|
open
|
[
"triaged",
"module: arm"
] | 6
|
NONE
|
### 🐛 Describe the bug
I am following the minimal example for compiling libtorch provided here: https://pytorch.org/cppdocs/installing.html
I am using libtorch for mac downloaded from here: https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.5.1.zip
This looks like it should be for arm64 and is the latest version from the PyTorch homepage.
I can run CMake fine, but am encountering issues when it comes to building and linking the c++.
```
[user@Mac build]$ cmake --build . --config Release
[ 50%] Building CXX object CMakeFiles/example.dir/example.cpp.o
[100%] Linking CXX executable example
Undefined symbols for architecture arm64:
"at::_ops::rand::call(c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional<bool>)", referenced from:
_main in example.cpp.o
"at::print(std::ostream&, at::Tensor const&, long long)", referenced from:
_main in example.cpp.o
"c10::TensorImpl::set_autograd_meta(std::unique_ptr<c10::AutogradMetaInterface, std::default_delete<c10::AutogradMetaInterface>>)", referenced from:
torch::autograd::make_variable(at::Tensor, bool, bool) in example.cpp.o
torch::autograd::make_variable(at::Tensor, bool, bool) in example.cpp.o
torch::autograd::make_variable(at::Tensor, bool, bool) in example.cpp.o
torch::autograd::make_variable(at::Tensor, bool, bool) in example.cpp.o
"c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>> const&)", referenced from:
c10::fromIntArrayRefSlow(c10::ArrayRef<long long>) in example.cpp.o
ld: symbol(s) not found for architecture arm64
collect2: error: ld returned 1 exit status
make[2]: *** [example] Error 1
make[1]: *** [CMakeFiles/example.dir/all] Error 2
make: *** [all] Error 2
```
This seems to be similar to what happened when there was no arm binary available and we had to build libtorch from source (see #110810), but more recently binaries have been available and worked fine.
I have been looking through various issues and can't find anything similar, so apologies if I've missed something, but to my knowledge I am just following the example from the website.
Note that this is using gcc-14 installed via homebrew which has worked fine in the past.
If I try to build without specifying a CXX or C compiler it defaults to AppleClang 16.0.0.16000026 and I get a different error:
```
[jwa34@Mac build]$ cmake --build . --config Release
[ 50%] Building CXX object CMakeFiles/example.dir/example.cpp.o
In file included from /Users/jwa34/libtorch_test/example.cpp:1:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/api/include/torch/torch.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/api/include/torch/all.h:7:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/api/include/torch/autograd.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/autograd/autograd.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/autograd/variable.h:6:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/autograd/cpp_hook.h:2:
In file included from /Users/jwa34/libtorch_test/libtorch/include/torch/csrc/autograd/function_hook.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/ATen/Tensor.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/ATen/core/Tensor.h:3:
In file included from /Users/jwa34/libtorch_test/libtorch/include/ATen/core/TensorBody.h:11:
In file included from /Users/jwa34/libtorch_test/libtorch/include/c10/core/Device.h:3:
/Users/jwa34/libtorch_test/libtorch/include/c10/core/DeviceType.h:10:10: fatal error: 'cstddef' file not found
10 | #include <cstddef>
| ^~~~~~~~~
1 error generated.
make[2]: *** [CMakeFiles/example.dir/example.cpp.o] Error 1
make[1]: *** [CMakeFiles/example.dir/all] Error 2
make: *** [all] Error 2
```
I have seen this sort of thing before where the default AppleClang becomes out of date, hence my decision to use gcc.
Note further that this behaviour was observed first in a larger project, but I decided to use the minimal example to try and pin down where things were going wrong.
The same error is occurring in the CI for that larger project and can be seen at e.g. https://github.com/Cambridge-ICCS/FTorch/actions/runs/12399866549/job/34615725295?pr=164
Many thanks for your assistance.
### Versions
Using libtorch 2.5.1 downloaded from https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.5.1.zip
cc @malfet @snadampal @milpuz01
| true
|
2,750,033,620
|
[ROCm] Guard triton backend call around cuda.is_available
|
jataylo
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 8
|
COLLABORATOR
|
To resolve: https://github.com/pytorch/test-infra/issues/6082
Calling into Triton's get_backend_options will initialise CUDA and break CPU-only environments that may have hip installed.
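A minimal sketch of the guarding pattern described above (not the actual diff; the Triton query is abstracted behind a callable passed in by the caller):
```python
# Only call into Triton's backend query when a GPU is actually present, so that
# CPU-only machines with a HIP toolchain installed never initialise the runtime.
import torch

def backend_options_or_default(query_backend_options, default=None):
    # query_backend_options stands in for the Triton call this PR guards.
    if not torch.cuda.is_available():
        return default
    return query_backend_options()
```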
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,750,010,063
|
[ROCm] Add miopen_batch_norm to meta_registrations to fix AOTI issue
|
jataylo
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"ciflow/rocm"
] | 7
|
COLLABORATOR
|
Currently the upstream example for AOTI usage breaks on ROCm (https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html)
```
File "/root/upstream/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: unsupported operator: aten.miopen_batch_norm.default (see https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0 for how to fix)
from user code:
File "/root/vision/torchvision/models/resnet.py", line 285, in forward
return self._forward_impl(x)
File "/root/vision/torchvision/models/resnet.py", line 269, in _forward_impl
x = self.bn1(x)
```
This PR adds a meta registration for miopen_batch_norm to resolve this issue.
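For illustration, a meta registration of this kind typically just allocates correctly-shaped empty outputs so that tracing can proceed without running the MIOpen kernel. The sketch below is an approximation; the real signature and save-buffer shapes follow the native_functions.yaml entry and the in-tree register_meta decorator, and may differ:
```python
# Approximate, shape-only sketch of a meta kernel for miopen_batch_norm
# (illustrative; the actual registration in torch/_meta_registrations.py may differ).
def meta_miopen_batch_norm(input, weight, bias, running_mean, running_var,
                           training, exponential_average_factor, epsilon):
    out = input.new_empty(input.shape)
    save_mean = input.new_empty((input.shape[1],))
    save_var = input.new_empty((input.shape[1],))
    return out, save_mean, save_var
```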
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd
| true
|
2,749,998,844
|
[Inductor][CPU] C++ compile error: no known conversion from `VecMask` to `VectorizedN`
|
guangyey
|
closed
|
[
"triaged",
"oncall: cpu inductor"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
Running the following reproducer raises a C++ compile error.
```python
import torch
def demo():
# Input tensors that are generated randomly
torch.manual_seed(777)
in_self_ln39273 = (
torch.randn(size=[128, 2501], device="cpu").ge(0)
)
def fwd_subgraph():
# pytorch op calls encoded from aten functions
res_ln39273_0, res_ln39273_1 = torch.max(
in_self_ln39273, 1, False
) # traced line: 39273
return res_ln39273_0, res_ln39273_1
sg_callable = torch.compile(fwd_subgraph)
with torch.amp.autocast('cpu', enabled=True, dtype=torch.bfloat16):
res_ln39273_0, res_ln39273_1 = sg_callable()
return {"res_ln39273_0": res_ln39273_0, "res_ln39273_1": res_ln39273_1}
print(demo())
```
The output is
```bash
E1219 03:07:31.455000 3547855 stock-pytorch/torch/_dynamo/repro/after_aot.py:119] [0/0] CompilerError
Traceback (most recent call last):
File "/home/pt-gpu/4T-4652/guangyey/test1.py", line 23, in <module>
print(dlrminfer_compile_bf16_step_1_fwd_max_rfid_39273())
File "/home/pt-gpu/4T-4652/guangyey/test1.py", line 20, in dlrminfer_compile_bf16_step_1_fwd_max_rfid_39273
res_ln39273_0, res_ln39273_1 = sg_callable()
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/convert_frame.py", line 1387, in __call__
return self._torchdynamo_orig_callable(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/convert_frame.py", line 1171, in __call__
result = self._inner_convert(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/convert_frame.py", line 548, in __call__
return _compile(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/convert_frame.py", line 988, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/convert_frame.py", line 716, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/convert_frame.py", line 751, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/convert_frame.py", line 232, in _fn
return fn(*args, **kwargs)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/convert_frame.py", line 663, in transform
tracer.run()
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/symbolic_convert.py", line 2870, in run
super().run()
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/symbolic_convert.py", line 1053, in run
while self.step():
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/symbolic_convert.py", line 963, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/symbolic_convert.py", line 3050, in RETURN_VALUE
self._return(inst)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/symbolic_convert.py", line 3035, in _return
self.output.compile_subgraph(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/output_graph.py", line 1136, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/compile_fx.py", line 1880, in compile_fx
return aot_autograd(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_functorch/aot_autograd.py", line 1145, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
compiled_fn = dispatch_and_compile()
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 201, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/compile_fx.py", line 1758, in fw_compiler_base
return inner_compile(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/compile_fx.py", line 572, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/compile_fx.py", line 686, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/compile_fx.py", line 1044, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/graph.py", line 1978, in compile_to_module
return self._compile_to_module()
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/graph.py", line 2019, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/codecache.py", line 2756, in load_by_key_path
mod = _reload_python_module(key, path)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/runtime/compile_tasks.py", line 45, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_pt-gpu/ya/cyarcc2jewtxa72ffr36zkk7ofyedvzpifamfhkv4rbglo3ros5w.py", line 76, in <module>
async_compile.wait(globals())
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/async_compile.py", line 306, in wait
scope[key] = result.result()
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/codecache.py", line 3250, in result
return self.result_fn()
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/codecache.py", line 2250, in future
result = get_result()
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/codecache.py", line 2040, in load_fn
future.result()
File "/home/pt-gpu/4T-4652/envs/ygy_stock/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/home/pt-gpu/4T-4652/envs/ygy_stock/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/pt-gpu/4T-4652/envs/ygy_stock/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/codecache.py", line 2081, in _worker_compile_cpp
cpp_builder.build()
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/cpp_builder.py", line 1544, in build
status = run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/cpp_builder.py", line 345, in run_compile_cmd
return _run_compile_cmd(cmd_line, cwd)
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/_inductor/cpp_builder.py", line 339, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CppCompileError: C++ compile error
Command:
g++ /tmp/torchinductor_pt-gpu/hj/chjaf7w6mzln3blgtjqbhumxw3gvjxyspdn5nfsz663nfp5rxqvv.cpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -D CPU_CAPABILITY_AVX512 -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/home/pt-gpu/4T-4652/envs/ygy_stock/include/python3.10 -I/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/include -I/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/include/torch/csrc/api/include -I/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/include/TH -I/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/include/THC -mavx512f -mavx512dq -mavx512vl -mavx512bw -mfma -mamx-tile -mamx-bf16 -mamx-int8 -D_GLIBCXX_USE_CXX11_ABI=1 -ltorch -ltorch_cpu -ltorch_python -lgomp -L/home/pt-gpu/4T-4652/envs/ygy_stock/lib -L/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/lib -o /tmp/torchinductor_pt-gpu/hj/chjaf7w6mzln3blgtjqbhumxw3gvjxyspdn5nfsz663nfp5rxqvv.so
Output:
/tmp/torchinductor_pt-gpu/hj/chjaf7w6mzln3blgtjqbhumxw3gvjxyspdn5nfsz663nfp5rxqvv.cpp: In function ‘void kernel(const bool*, bool*, int64_t*)’:
/tmp/torchinductor_pt-gpu/hj/chjaf7w6mzln3blgtjqbhumxw3gvjxyspdn5nfsz663nfp5rxqvv.cpp:23:80: error: no matching function for call to ‘argmax_combine_vec<bool, 1, 2, true>(IndexValueVec<bool, 1, 2>&, at::vec::CPU_CAPABILITY::VecMask<float, 1>&, int64_t&)’
23 | tmp_acc1_vec = argmax_combine_vec<bool, 1, 2, true>(tmp_acc1_vec, tmp0, x1);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~
In file included from /tmp/torchinductor_pt-gpu/hj/chjaf7w6mzln3blgtjqbhumxw3gvjxyspdn5nfsz663nfp5rxqvv.cpp:2:
/tmp/torchinductor_pt-gpu/2r/c2rnilspx43ivnzu4uieul65kx65dfhfbptbh5og4wk6rqebuxoo.h:396:34: note: candidate: ‘IndexValueVec<T, NV, NI>& argmax_combine_vec(IndexValueVec<T, NV, NI>&, at::vec::CPU_CAPABILITY::VectorizedN<T, N>, int64_t, std::optional<long int>) [with T = bool; int NV = 1; int NI = 2; bool horizontal = true; int64_t = long int]’
396 | inline IndexValueVec<T, NV, NI>& argmax_combine_vec(IndexValueVec<T, NV, NI>& a, at::vec::VectorizedN<T, NV> next_value, int64_t next_index, std::optional<int64_t> tail_size = std::nullopt){
| ^~~~~~~~~~~~~~~~~~
/tmp/torchinductor_pt-gpu/2r/c2rnilspx43ivnzu4uieul65kx65dfhfbptbh5og4wk6rqebuxoo.h:396:110: note: no known conversion for argument 2 from ‘at::vec::CPU_CAPABILITY::VecMask<float, 1>’ to ‘at::vec::CPU_CAPABILITY::VectorizedN<bool, 1>’
396 | inline IndexValueVec<T, NV, NI>& argmax_combine_vec(IndexValueVec<T, NV, NI>& a, at::vec::VectorizedN<T, NV> next_value, int64_t next_index, std::optional<int64_t> tail_size = std::nullopt){
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~
/tmp/torchinductor_pt-gpu/2r/c2rnilspx43ivnzu4uieul65kx65dfhfbptbh5og4wk6rqebuxoo.h:435:34: note: candidate: ‘template<class T, int NV, int NI> IndexValueVec<T, NV, NI>& argmax_combine_vec(IndexValueVec<T, NV, NI>&, const IndexValueVec<T, NV, NI>&, std::optional<long int>)’
435 | inline IndexValueVec<T, NV, NI>& argmax_combine_vec(IndexValueVec<T, NV, NI>& vec_a, const IndexValueVec<T, NV, NI>& vec_b, std::optional<int64_t> tail_size = std::nullopt){
| ^~~~~~~~~~~~~~~~~~
/tmp/torchinductor_pt-gpu/2r/c2rnilspx43ivnzu4uieul65kx65dfhfbptbh5og4wk6rqebuxoo.h:435:34: note: template argument deduction/substitution failed:
/tmp/torchinductor_pt-gpu/hj/chjaf7w6mzln3blgtjqbhumxw3gvjxyspdn5nfsz663nfp5rxqvv.cpp:23:80: error: wrong number of template arguments (4, should be 3)
23 | tmp_acc1_vec = argmax_combine_vec<bool, 1, 2, true>(tmp_acc1_vec, tmp0, x1);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~
In file included from /tmp/torchinductor_pt-gpu/hj/chjaf7w6mzln3blgtjqbhumxw3gvjxyspdn5nfsz663nfp5rxqvv.cpp:2:
/tmp/torchinductor_pt-gpu/2r/c2rnilspx43ivnzu4uieul65kx65dfhfbptbh5og4wk6rqebuxoo.h:435:34: note: provided for ‘template<class T, int NV, int NI> IndexValueVec<T, NV, NI>& argmax_combine_vec(IndexValueVec<T, NV, NI>&, const IndexValueVec<T, NV, NI>&, std::optional<long int>)’
435 | inline IndexValueVec<T, NV, NI>& argmax_combine_vec(IndexValueVec<T, NV, NI>& vec_a, const IndexValueVec<T, NV, NI>& vec_b, std::optional<int64_t> tail_size = std::nullopt){
| ^~~~~~~~~~~~~~~~~~
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+git2d0d447
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Genuine Intel(R) CPU 0000%@
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 5
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] optree==0.13.1
[pip3] pytorch-triton-xpu==3.2.0+gite98b6fcb
[pip3] torch==2.6.0a0+git2d0d447
[conda] numpy 2.2.0 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton-xpu 3.2.0+gite98b6fcb pypi_0 pypi
[conda] torch 2.6.0a0+git2d0d447 dev_0 <develop>
cc @jgong5 @EikanWang
| true
|
2,749,941,004
|
[dynamo] Remove transformers ModelOutput hack
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143567
* #143548
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,749,936,862
|
[Inductor] [CPU] [silent] `bitwise_left_shift` outputs wrong results compared with eager
|
shaoyuyoung
|
closed
|
[
"oncall: pt2",
"module: inductor",
"oncall: cpu inductor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
related to #143555
```python
import torch
import torch.nn as nn
from torch._inductor import config
config.fallback_random = True
torch.use_deterministic_algorithms(True)
torch.manual_seed(42)
class Model(nn.Module):
def __init__(self):
super().__init__()
def forward(self, input, other):
return torch.bitwise_left_shift(input=input, other=other)
input = torch.tensor(1000, dtype=torch.int64)
other = torch.tensor(64, dtype=torch.int64)
inputs = [input, other]
model = Model()
output = model(*inputs)
c_m = torch.compile(model)
c_output = c_m(*inputs)
print(output)
print(c_output)
```
### Error logs
```
tensor(0)
tensor(1000)
```
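A plausible explanation (an assumption, not confirmed from the generated kernel) is that the divergence only appears for shift amounts at or above the 64-bit width, where the native C++ `<<` used by the compiled code is undefined behaviour, while eager returns 0 here. The sketch below sweeps the shift amount in eager mode to narrow down the boundary:
```python
# Sweep the shift amount in eager mode to see where the behaviour changes
# (shift >= 64 is out of range for int64).
import torch

x = torch.tensor(1000, dtype=torch.int64)
for shift in (1, 32, 63, 64, 65):
    y = torch.bitwise_left_shift(x, torch.tensor(shift, dtype=torch.int64))
    print(shift, y.item())
```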
### Versions
PyTorch version: 2.6.0.dev20241218+cu126
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241218+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-202-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.996
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+gitf9cdf582
[pip3] torch==2.6.0.dev20241218+cu126
[pip3] torchaudio==2.6.0.dev20241218+cu126
[pip3] torchvision==0.22.0.dev20241218+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+gitf9cdf582 pypi_0 pypi
[conda] torch 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241218+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,749,822,052
|
Failed to load TorchScript of SSDLite in android
|
sun-jiao
|
open
|
[
"oncall: mobile"
] | 1
|
NONE
|
### 🐛 Describe the bug
I tried to load SSDLite on Android but it always fails.
Here is my code for exporting the TorchScript model: [sun-jiao/pytorch_ssdlite_export](https://github.com/sun-jiao/pytorch_ssdlite_export).
Use `detection_export.py` to convert the pretrained model to TorchScript, then use `detection.py` to check that the exported model works.
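For context, a minimal sketch of that kind of export is shown below (an assumption of what detection_export.py roughly does, not its actual contents). SSDLite's post-processing calls torchvision.ops.nms, which is why the scripted module references the torchvision::nms op seen in the crash log further down:
```python
# Hypothetical TorchScript export for SSDLite; see the linked repo for the real script.
import torch
import torchvision

model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval()
scripted = torch.jit.script(model)
scripted.save("ssdlite.pt")
```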
And here is a minimized Android project to reproduce this issue: [sun-jiao/pytorch_detection_example](https://github.com/sun-jiao/pytorch_detection_example).
Copy or move the above exported `ssdlite.pt` to `app/src/main/assets/ssdlite.pt` and run the Android project.
Here is the UI:

Click the button "Load model" and it will crash.
Here is the related log:
```
2024-12-19 17:55:53.534 7148-7148 nativeloader net.sunjiao.pytorchdetectionexample D Load /data/app/~~QEZItQNbBIyxcvyQgPi2uQ==/net.sunjiao.pytorchdetectionexample-n9cinNNg6poEN6AyrBDSpA==/base.apk!/lib/arm64-v8a/libpytorch_jni.so using ns clns-7 from class loader (caller=/data/app/~~QEZItQNbBIyxcvyQgPi2uQ==/net.sunjiao.pytorchdetectionexample-n9cinNNg6poEN6AyrBDSpA==/base.apk!classes5.dex): ok
2024-12-19 17:55:53.535 7148-7148 nativeloader net.sunjiao.pytorchdetectionexample D Load libtorch-code-gen.so using ns clns-7 from class loader (caller=/data/app/~~QEZItQNbBIyxcvyQgPi2uQ==/net.sunjiao.pytorchdetectionexample-n9cinNNg6poEN6AyrBDSpA==/base.apk!classes5.dex): dlopen failed: library "libtorch-code-gen.so" not found
2024-12-19 17:55:53.772 7148-7148 AndroidRuntime net.sunjiao.pytorchdetectionexample E FATAL EXCEPTION: main
Process: net.sunjiao.pytorchdetectionexample, PID: 7148
com.facebook.jni.CppException:
Unknown builtin op: torchvision::nms.
Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.
:
File "code/__torch__/torchvision/ops/boxes.py", line 128
_55 = __torch__.torchvision.extension._assert_has_ops
_56 = _55()
_57 = ops.torchvision.nms(boxes, scores, iou_threshold)
~~~~~~~~~~~~~~~~~~~ <--- HERE
return _57
at org.pytorch.NativePeer.initHybrid(Native Method)
at org.pytorch.NativePeer.<init>(NativePeer.java:27)
at org.pytorch.Module.load(Module.java:28)
at org.pytorch.Module.load(Module.java:38)
at net.sunjiao.pytorchdetectionexample.MainActivityKt.LoadModelButton$lambda$0(MainActivity.kt:64)
at net.sunjiao.pytorchdetectionexample.MainActivityKt.$r8$lambda$tsHf2Yc3D2EpbqvM6adjyUQecUc(Unknown Source:0)
at net.sunjiao.pytorchdetectionexample.MainActivityKt$$ExternalSyntheticLambda0.invoke(D8$$SyntheticClass:0)
at androidx.compose.foundation.ClickablePointerInputNode$pointerInput$3.invoke-k-4lQ0M(Clickable.kt:987)
at androidx.compose.foundation.ClickablePointerInputNode$pointerInput$3.invoke(Clickable.kt:981)
at androidx.compose.foundation.gestures.TapGestureDetectorKt$detectTapAndPress$2$1.invokeSuspend(TapGestureDetector.kt:255)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:179)
at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:168)
at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:474)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl(CancellableContinuationImpl.kt:508)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$default(CancellableContinuationImpl.kt:497)
at kotlinx.coroutines.CancellableContinuationImpl.resumeWith(CancellableContinuationImpl.kt:368)
at androidx.compose.ui.input.pointer.SuspendingPointerInputModifierNodeImpl$PointerEventHandlerCoroutine.offerPointerEvent(SuspendingPointerInputFilter.kt:665)
at androidx.compose.ui.input.pointer.SuspendingPointerInputModifierNodeImpl.dispatchPointerEvent(SuspendingPointerInputFilter.kt:544)
at androidx.compose.ui.input.pointer.SuspendingPointerInputModifierNodeImpl.onPointerEvent-H0pRuoY(SuspendingPointerInputFilter.kt:566)
at androidx.compose.foundation.AbstractClickablePointerInputNode.onPointerEvent-H0pRuoY(Clickable.kt:947)
at androidx.compose.foundation.AbstractClickableNode.onPointerEvent-H0pRuoY(Clickable.kt:795)
at androidx.compose.ui.input.pointer.Node.dispatchMainEventPass(HitPathTracker.kt:317)
at androidx.compose.ui.input.pointer.Node.dispatchMainEventPass(HitPathTracker.kt:303)
at androidx.compose.ui.input.pointer.Node.dispatchMainEventPass(HitPathTracker.kt:303)
at androidx.compose.ui.input.pointer.NodeParent.dispatchMainEventPass(HitPathTracker.kt:185)
at androidx.compose.ui.input.pointer.HitPathTracker.dispatchChanges(HitPathTracker.kt:104)
at androidx.compose.ui.input.pointer.PointerInputEventProcessor.process-BIzXfog(PointerInputEventProcessor.kt:113)
at androidx.compose.ui.platform.AndroidComposeView.sendMotionEvent-8iAsVTc(AndroidComposeView.android.kt:1576)
at androidx.compose.ui.platform.AndroidComposeView.handleMotionEvent-8iAsVTc(AndroidComposeView.android.kt:1527)
at androidx.compose.ui.platform.AndroidComposeView.dispatchTouchEvent(AndroidComposeView.android.kt:1466)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3122)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2803)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3122)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2803)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3122)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2803)
2024-12-19 17:55:53.773 7148-7148 AndroidRuntime net.sunjiao.pytorchdetectionexample E at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3122)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2803)
at com.android.internal.policy.DecorView.superDispatchTouchEvent(DecorView.java:458)
at com.android.internal.policy.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1982)
at android.app.Activity.dispatchTouchEvent(Activity.java:4533)
at com.android.internal.policy.DecorView.dispatchTouchEvent(DecorView.java:416)
at android.view.View.dispatchPointerEvent(View.java:16737)
at android.view.ViewRootImpl$ViewPostImeInputStage.processPointerEvent(ViewRootImpl.java:7974)
at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:7732)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:7128)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:7185)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:7151)
at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:7317)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:7159)
at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:7374)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:7132)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:7185)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:7151)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:7159)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:7132)
at android.view.ViewRootImpl.deliverInputEvent(ViewRootImpl.java:10241)
at android.view.ViewRootImpl.doProcessInputEvents(ViewRootImpl.java:10192)
at android.view.ViewRootImpl.enqueueInputEvent(ViewRootImpl.java:10161)
at android.view.ViewRootImpl$WindowInputEventReceiver.onInputEvent(ViewRootImpl.java:10383)
at android.view.InputEventReceiver.dispatchInputEvent(InputEventReceiver.java:295)
at android.os.MessageQueue.nativePollOnce(Native Method)
at android.os.MessageQueue.next(MessageQueue.java:346)
at android.os.Looper.loopOnce(Looper.java:189)
at android.os.Looper.loop(Looper.java:317)
at android.app.ActivityThread.main(ActivityThread.java:8710)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:582)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:886)
Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException: [androidx.compose.ui.platform.MotionDurationScaleImpl@c93bdeb, androidx.compose.runtime.BroadcastFrameClock@d87a748, StandaloneCoroutine{Cancelling}@16a59e1, AndroidUiDispatcher@6b10306]
2024-12-19 17:55:53.779 7148-7148 Process net.sunjiao.pytorchdetectionexample I Sending signal. PID: 7148 SIG: 9
---------------------------- PROCESS ENDED (7148) for package net.sunjiao.pytorchdetectionexample ----------------------------
```
### Versions
Failed to run it so I'll give environment manually:
```
Collecting environment information...
Traceback (most recent call last):
File "collect_env.py", line 692, in <module>
main()
File "collect_env.py", line 675, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "collect_env.py", line 670, in get_pretty_env_info
return pretty_str(get_env_info())
^^^^^^^^^^^^^^
File "collect_env.py", line 495, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "collect_env.py", line 450, in get_pip_packages
for line in out.splitlines()
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
System environment:
```
-` sunjiao@arch-83al
.o+` -----------------
`ooo/ OS: Arch Linux x86_64
`+oooo: Host: 83AL XiaoXinPro 14 IRH8
`+oooooo: Kernel: 6.12.4-zen1-1-zen
-+oooooo+: Uptime: 1 day, 21 hours, 59 mins
`/:-:++oooo+: Packages: 1794 (pacman)
`/++++/+++++++: Shell: bash 5.2.37
`/++++++++++++++: Resolution: 2880x1800
`/+++ooooooooooooo/` DE: Plasma 6.2.4
./ooosssso++osssssso+` WM: kwin
.oossssso-````/ossssss+` WM Theme: Lavanda-Sea-Light
-osssssso. :ssssssso. Theme: [Plasma], FRESH-Blueberries [GTK2/3]
:osssssss/ osssso+++. Icons: Fluent [Plasma], Fluent [GTK2/3]
/ossssssss/ +ssssooo/- Terminal: konsole
`/ossssso+/:- -:/+osssso+- CPU: 13th Gen Intel i5-13500H (16) @ 4.700GHz
`+sso+:-` `.-/+oso: GPU: Intel Raptor Lake-P [Iris Xe Graphics]
`++:. `-/+/ Memory: 23426MiB / 31816MiB
.` `/
```
Python version:
```
$ python --version
Python 3.12.7
```
Package version:
```
$ pip freeze | grep torch
pytorch-lightning==2.4.0
torch==2.5.1
torchaudio==2.5.1
torchmetrics==1.6.0
torchvision==0.20.1
```
Pytorch android package:
```
api("org.pytorch", "pytorch_android", "2.1.0")
api("org.pytorch", "pytorch_android_torchvision", "2.1.0")
```
| true
|
2,749,800,628
|
[Bug] Memory leak in C++ libtorch
|
TheophileChampion
|
open
|
[
"module: cpp",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
There seems to be a bug in (C++) libtorch that leads to a memory leak.
```
#include <torch/extension.h>
#include <thread>
class FrameVector {
private:
std::vector<int> data;
public:
FrameVector() {
std::vector<int> data(7056);
this->data = data;
}
};
class FrameTensor {
private:
torch::Tensor data;
public:
FrameTensor() {
this->data = torch::zeros({1, 84, 84});
}
};
template<class T>
void f() {
int capacity = 1000000;
std::vector<std::vector<T>> frames(capacity);
for (auto i = 0; i < capacity + 1000000; i++) {
if (i == capacity) {
std::cout << "buffer is full!" << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(2));
std::cout << "restart!" << std::endl;
}
frames[i % capacity].push_back(T());
if (i >= capacity) {
frames[i % capacity].erase(frames[i % capacity].begin());
}
}
}
int main(int argc, char *argv[])
{
f<FrameTensor>(); // needs 34G to fill the replay buffer, then memory increases to around 60G
f<FrameVector>(); // needs 34G to fill the replay buffer, then memory stay constant (as it should)
}
```
The bug only seems to occur when the `torch::Tensor` is stored in nested containers, for example:
- `std::vector<std::vector<T>>`
- `std::vector<std::deque<T>>`
I believe the internal counter that keeps track of the number of references to the `torch::Tensor` fails to count them correctly, which means the tensors' memory is never released.
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 Ti
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU(s) scaling MHz: 27%
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualisation: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchrl==0.6.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 1.26.4 py311h24aa872_0
[conda] numpy-base 1.26.4 py311hbfb1bba_0
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.3.0 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
cc @jbschlosser
| true
|
2,749,696,858
|
[Inductor] Fix FX Graph Cache with constant adding in lowering
|
leslie-fang-intel
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143563
**Summary**
Fix https://github.com/pytorch/pytorch/issues/143144. The issue happens because we add new constants into `GraphLowering` when `freezing` is turned on. This PR records those new constants and sets them on the loaded Python module.
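For context, a minimal sketch (not taken from this PR) of the kind of configuration that exercises this path — freezing plus the FX graph cache; the tiny model is arbitrary and only illustrates the flags:
```python
import torch

# Public inductor configs; freezing folds weights into new constants,
# which is the case the cache fix above has to handle.
torch._inductor.config.freezing = True
torch._inductor.config.fx_graph_cache = True

m = torch.nn.Linear(8, 8).eval()
with torch.no_grad():
    out = torch.compile(m)(torch.randn(2, 8))
```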
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_codecache.py -k test_cpp_max_autotune
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,749,647,878
|
"aten::any" and "aten::all" behavior for unsigned tensors different than signed
|
DanielLevi6
|
open
|
[
"triaged",
"module: reductions",
"module: unsigned int"
] | 3
|
NONE
|
### 🐛 Describe the bug

When I use torch.any or torch.all on an int8 tensor, the result is boolean, as expected for these boolean ops. But for some reason, when I use uint8, the result is a uint8 tensor. That's not how I expect it to work. Is there a reason for this behavior?
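A minimal sketch of the behavior described above:
```python
import torch

x_int8 = torch.tensor([0, 1, 2], dtype=torch.int8)
x_uint8 = torch.tensor([0, 1, 2], dtype=torch.uint8)

print(torch.any(x_int8).dtype)   # torch.bool, as expected
print(torch.any(x_uint8).dtype)  # reported as torch.uint8 instead of torch.bool
print(torch.all(x_uint8).dtype)  # same behavior reported for torch.all
```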
### Versions
torch: 2.4.0+cu121
python: 3.9.1
| true
|
2,749,535,717
|
Add the max_autotune tests in the periodic jobs.
|
LifengWang
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng",
"module: dynamo"
] | 19
|
CONTRIBUTOR
|
To promptly detect issues with max_autotune, such as [#143102](https://github.com/pytorch/pytorch/issues/143102), add the max_autotune tests to the periodic CI to track the accuracy regularly.
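For reference, a minimal sketch of what the max_autotune mode toggles outside of CI; the tiny matmul is only illustrative:
```python
import torch

# Equivalent to setting TORCHINDUCTOR_MAX_AUTOTUNE=1 in the environment.
torch._inductor.config.max_autotune = True

@torch.compile
def f(a, b):
    return a @ b

f(torch.randn(256, 256), torch.randn(256, 256))
```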
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,749,459,223
|
[hop] fix unbacked_bindings meta for while_loop
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143559
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,749,373,941
|
[Don't merge] Xu debug illegal instraction
|
xuhancn
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,749,325,669
|
[Dynamo] Support dict_keys from nested dict object
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143557
* #143547
* #143374
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,749,287,043
|
[caffe2] Move vectorized templates into a separate file for box_cox operator
|
efiks
|
closed
|
[
"caffe2",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary: No functional changes in this diff; the code is moved into a separate file so it can be reused by the AVX512 version in a follow-up diff.
Test Plan: buck build //caffe2/caffe2/perfkernels:perfkernels
Differential Revision: D67433115
| true
|
2,749,286,212
|
[Inductor] [CPU] [silent] `bitwise_right_shift` outputs wrong results when `other==64`
|
shaoyuyoung
|
closed
|
[
"high priority",
"triage review",
"oncall: pt2",
"module: inductor",
"oncall: cpu inductor"
] | 8
|
CONTRIBUTOR
|
### 🐛 Describe the bug
For a tensor input, if `other` is "**out-of-bound**" (here 64 for an int64 input), the CPU inductor seems to start shifting from the beginning again, i.e. the shift count appears to wrap around.
Although it is an out-of-bound case, the incorrectness is silent, so I think it is high-pri (?).
CUDA outputs zero correctly.
```python
import torch
import torch.nn as nn
from torch._inductor import config
config.fallback_random = True
torch.use_deterministic_algorithms(True)
torch.manual_seed(42)
class Model(nn.Module):
def __init__(self):
super().__init__()
def forward(self, input, other):
return torch.bitwise_right_shift(input=input, other=other)
input = torch.tensor(1000, dtype=torch.int64)
other = torch.tensor(64, dtype=torch.int64)
inputs = [input, other]
model = Model()
output = model(*inputs)
c_m = torch.compile(model)
c_output = c_m(*inputs)
print(output)
print(c_output)
```
### Error logs
```
tensor(0)
tensor(1000)
```
### Versions
PyTorch version: 2.6.0.dev20241218+cu126
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241218+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-202-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.996
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+gitf9cdf582
[pip3] torch==2.6.0.dev20241218+cu126
[pip3] torchaudio==2.6.0.dev20241218+cu126
[pip3] torchvision==0.22.0.dev20241218+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+gitf9cdf582 pypi_0 pypi
[conda] torch 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241218+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,749,253,217
|
[DONT MERGE] test libm.lib
|
chuanqi129
|
closed
|
[
"open source",
"ciflow/binaries",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,749,234,921
|
[Draft][WIP] Enable XPU path for FlexAttention
|
liangan1
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"module: dynamo"
] | 22
|
NONE
|
**Motivation**
1. Attention has been the critical performance bottleneck in current LLM models, and FlexAttention is a good choice for covering the broad attention variants in the transformers series of models. With FlexAttention, it is easy for us to enable paged attention and fused SDPA in the transformers repo on the XPU device. Besides, it also provides a candidate for processing attention in LLM ecosystem libraries, e.g., vLLM and SGLang, on the XPU device.
2. FlexAttention is a good starting point for pushing the Intel Triton-based GEMM kernels to maturity. FlexAttention provides both a flexattention kernel and a flexdecoding kernel to cover compute-bound and memory-bound GEMM computation, and different shapes should also be supported to serve LLM inference, e.g. head_dim = 64, 96, 128, 256.
**What does this PR do?**
1. Enable the device type for the FlexAttention kernel and UTs to ensure all important UTs pass on the XPU device.
2. For E2E model inference, ensure that LLM model inference with FlexAttention is functional; a minimal usage sketch is included below.
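A minimal usage sketch, assuming an XPU-enabled build with this change applied; the causal `score_mod` and the tensor shapes are only illustrative:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # Mask out positions where the key index is ahead of the query index.
    return torch.where(q_idx >= kv_idx, score, float("-inf"))

q = torch.randn(1, 8, 128, 64, device="xpu", dtype=torch.float16)
k = torch.randn(1, 8, 128, 64, device="xpu", dtype=torch.float16)
v = torch.randn(1, 8, 128, 64, device="xpu", dtype=torch.float16)

# flex_attention is meant to be compiled so the score_mod is fused into the kernel.
out = torch.compile(flex_attention)(q, k, v, score_mod=causal)
```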
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,749,217,881
|
[dynamo] Shorten tracebacks for backend compiler errors
|
jansel
|
closed
|
[
"Merged",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143626
* #143610
* __->__ #143552
Fixes #143406
After this PR the error for missing Triton is:
```py
Traceback (most recent call last):
File "/home/jansel/pytorch/repro.py", line 51, in <module>
fp32_compiled = optimized_model(low_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/eval_frame.py", line 580, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 3624, in create_backend
raise TritonMissing(inspect.currentframe())
torch._dynamo.exc.TritonMissing: Cannot find a working triton installation. Either the package is not installed or it is too old. More information on installing Triton can be found at: https://github.com/triton-lang/triton
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
Setting `TORCHDYNAMO_VERBOSE=1` yields something like the old error:
```py
Traceback (most recent call last):
File "/home/jansel/pytorch/repro.py", line 51, in <module>
fp32_compiled = optimized_model(low_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/eval_frame.py", line 580, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 1383, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 1167, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 548, in __call__
return _compile(
^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 988, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 716, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 751, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 232, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/convert_frame.py", line 663, in transform
tracer.run()
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 2870, in run
super().run()
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 1053, in run
while self.step():
^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 963, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 3050, in RETURN_VALUE
self._return(inst)
File "/home/jansel/pytorch/torch/_dynamo/symbolic_convert.py", line 3035, in _return
self.output.compile_subgraph(
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1102, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1383, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1433, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/output_graph.py", line 1463, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1880, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 1145, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 676, in aot_dispatch_autograd
compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1758, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 572, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 686, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/compile_fx.py", line 1044, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1975, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1981, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/graph.py", line 1916, in codegen
self.scheduler.codegen()
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 3667, in codegen
return self._codegen()
^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 3761, in _codegen
if device is not None and self.get_backend(device).ready_to_flush():
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 3631, in get_backend
self.backends[device] = self.create_backend(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_inductor/scheduler.py", line 3624, in create_backend
raise TritonMissing(inspect.currentframe())
torch._dynamo.exc.TritonMissing: Cannot find a working triton installation. Either the package is not installed or it is too old. More information on installing Triton can be found at: https://github.com/triton-lang/triton
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
This PR also strips dynamo stack frames from other types of backend compile errors.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,749,179,797
|
Add cutlass version guard in prep for upgrade
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: sparse"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143551
| true
|
2,749,154,243
|
Fix torch.accelerator api abort when passing invaild device
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"ciflow/rocm",
"ciflow/xpu",
"release notes: xpu",
"module: accelerator"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143550
# Motivation
Fix https://github.com/pytorch/pytorch/issues/143543
# Solution
We should raise a Python exception instead of aborting.
# Additional Context
without this PR:
```python
>>> import torch
>>> torch.accelerator.current_stream(torch.accelerator.device_count())
terminate called after throwing an instance of 'c10::Error'
what(): device is out of range, device is 2, total number of device is 2.
Exception raised from check_device_index at /home/dvrogozh/git/pytorch/pytorch/c10/xpu/XPUFunctions.h:36 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xac (0x7f30707eb95c in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xf3 (0x7f307078fc57 in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10.so)
frame #2: <unknown function> + 0x19a3e (0x7f3070c2ba3e in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10_xpu.so)
frame #3: c10::xpu::getCurrentXPUStream(signed char) + 0x2f (0x7f3070c2c83f in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10_xpu.so)
frame #4: <unknown function> + 0x1ca35 (0x7f3070c2ea35 in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10_xpu.so)
frame #5: <unknown function> + 0x653f15 (0x7f3083391f15 in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0x39e5f2 (0x7f30830dc5f2 in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libtorch_python.so)
<omitting python frames>
frame #20: <unknown function> + 0x29d90 (0x7f308b19bd90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #21: __libc_start_main + 0x80 (0x7f308b19be40 in /lib/x86_64-linux-gnu/libc.so.6)
Aborted (core dumped)
```
with this PR:
```python
>>> import torch
>>> torch.accelerator.current_stream(torch.accelerator.device_count())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pt-gpu/4T-4652/guangyey/stock-pytorch/torch/accelerator/__init__.py", line 123, in current_stream
return torch._C._accelerator_getStream(device_index)
RuntimeError: The device index is out of range. It must be in [0, 2), but got 2.
```
cc @albanD @EikanWang
| true
|
2,749,140,514
|
[reland][AMD] Turn on TF32 for aten::mm
|
xw285cornell
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/rocm",
"ci-no-td"
] | 20
|
CONTRIBUTOR
|
Summary:
hipblaslt supports TF32, so this adds support for it.
Original PR https://github.com/pytorch/pytorch/pull/139869
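For reference, a minimal sketch of the TF32 knob this relates to (assuming, per the summary above, that the hipblaslt path is gated behind the same flag as on CUDA; on ROCm builds the `cuda` device maps to HIP):
```python
import torch

# TF32 for float32 matmuls is opt-in via this flag.
torch.backends.cuda.matmul.allow_tf32 = True

a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")
c = torch.mm(a, b)
```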
Test Plan: CI
Differential Revision: D67431681
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,749,133,435
|
[dynamo] Support user defined dicts
|
anijain2305
|
closed
|
[
"module: mkldnn",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going",
"ciflow/linux-aarch64"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143567
* __->__ #143548
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,749,105,988
|
[Dynamo] Rename Dict{View/Keys/Values} to Dict{View/Keys/Values}Variable
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143557
* __->__ #143547
* #143374
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,749,086,826
|
torch export programming model
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143546
Differential Revision: [D67429743](https://our.internmc.facebook.com/intern/diff/D67429743/)
| true
|
2,749,038,678
|
[hop][BE] unify meta checking with check_meta_consistency
|
ydwu4
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143457
* #143559
* #143456
* #143106
* __->__ #143545
* #143105
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,749,035,773
|
Avoid std::copy_n
|
cyyever
|
open
|
[
"module: cpu",
"triaged",
"open source",
"Stale",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,749,028,119
|
xpu: torch.accelerate.current_stream() throws C++ instead of Python exception on invalid device
|
dvrogozh
|
closed
|
[
"triaged",
"module: python frontend"
] | 1
|
CONTRIBUTOR
|
With https://github.com/pytorch/pytorch/commit/2c48af568a082c70ca4ca2cdc9b6469ed253a371 on a single-card XPU system.
I am trying the `torch.accelerator.current_stream()` API. It throws a C++ exception when I try to query the stream for the non-existing 2nd card. An exception is expected, but it should be a Python exception, not a C++ one.
```
$ python -c 'import torch; print(f"device_count={torch.accelerator.device_count()}"); torch.accelerator.current_stream(1)'
device_count=1
terminate called after throwing an instance of 'c10::Error'
what(): device is out of range, device is 1, total number of device is 1.
Exception raised from check_device_index at /home/dvrogozh/git/pytorch/pytorch/c10/xpu/XPUFunctions.h:36 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xac (0x7f30707eb95c in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xf3 (0x7f307078fc57 in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10.so)
frame #2: <unknown function> + 0x19a3e (0x7f3070c2ba3e in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10_xpu.so)
frame #3: c10::xpu::getCurrentXPUStream(signed char) + 0x2f (0x7f3070c2c83f in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10_xpu.so)
frame #4: <unknown function> + 0x1ca35 (0x7f3070c2ea35 in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libc10_xpu.so)
frame #5: <unknown function> + 0x653f15 (0x7f3083391f15 in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0x39e5f2 (0x7f30830dc5f2 in /home/dvrogozh/git/pytorch/pytorch/torch/lib/libtorch_python.so)
<omitting python frames>
frame #20: <unknown function> + 0x29d90 (0x7f308b19bd90 in /lib/x86_64-linux-gnu/libc.so.6)
frame #21: __libc_start_main + 0x80 (0x7f308b19be40 in /lib/x86_64-linux-gnu/libc.so.6)
Aborted (core dumped)
```
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5
cc @albanD
| true
|
2,749,024,730
|
[Codemod][AddExplicitStrictExportArg] caffe2/torch/onnx/_internal/exporter
|
gmagogsfm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Reviewed By: avikchaudhuri
Differential Revision: D67381244
| true
|
2,749,020,149
|
torch/accelerator: fix device type comparison
|
dvrogozh
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: bug fixes",
"topic: not user facing",
"ciflow/mps",
"ciflow/rocm",
"ciflow/xpu",
"module: accelerator"
] | 15
|
CONTRIBUTOR
|
This was failing without the fix:
```
python -c 'import torch; d=torch.device("xpu:0"); torch.accelerator.current_stream(d)'
```
with:
```
ValueError: xpu doesn't match the current accelerator xpu.
```
CC: @guangyey, @EikanWang
cc @albanD @guangyey @EikanWang
| true
|
2,749,013,260
|
Fix FSDP hanging
|
phos-phophy
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Stale",
"release notes: distributed (fsdp)"
] | 4
|
NONE
|
Fixes #143536
It should be noted that I don't have enough expertise to be sure that this is the correct fix, but it worked for me when I created that issue.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,749,012,375
|
[1/n] Support Dynamic Memory Budget in Auto AC
|
basilwong
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 18
|
CONTRIBUTOR
|
# Summary:
Full Context: https://docs.google.com/document/d/1-j5KSbfGFJQcH4sYh7BIeJXso3zYzl5G5yFQqXdKx_o/edit?usp=sharing
tl;dr
This change introduces classes which help determine a dynamic memory budget. This will mostly be helpful for models with many implicit graph breaks.
---
New Classes:
*GraphInfoProvider*
* Takes the joint_graph as well as the input memories and runtimes and parses the graph + values into usable forms for the SolverEvaluator.
*KnapsackEvaluator*
* Provides a function: given the four inputs (the solver function as a callable, max_dynamic_memory_budget, min_dynamic_memory_budget, dynamic_memory_budget_pareto_granularity), it returns an approximation of the knee point of the Pareto distribution; a rough sketch of the knee-point idea is shown below.
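A rough sketch of the knee-point approximation idea, not the actual implementation; the (budget, runtime) points are made up:
```python
def knee_point(points):
    """Pick the point farthest from the straight line joining the two extremes."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5

    def dist(p):
        x, y = p
        return abs(dy * x - dx * y + x1 * y0 - y1 * x0) / norm

    return max(points, key=dist)

# (memory budget, estimated runtime) pairs, sorted by budget.
pareto = [(0.1, 9.0), (0.3, 6.0), (0.5, 3.5), (0.7, 3.2), (0.9, 3.1)]
print(knee_point(pareto))  # -> (0.5, 3.5) for this toy frontier
```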
# Test Plan:
### LintRunner
LintRunner Output: P1700445547
### Unit Tests
```
$ buck test @mode/opt //caffe2/test/functorch:test_ac_knapsack
`@mode/opt` was specified, but not found. Using file at `//mode/opt`.
This behavior is being deprecated. Please use `"@//mode/opt"` instead
File changed: fbcode//caffe2/.ruff_cache/0.7.4/.tmpB6PmDS
File changed: fbsource//xplat/caffe2/test/functorch/test_ac_knapsack.py
File changed: fbcode//caffe2/.ruff_cache/0.7.4/.tmpyjCiPn
20 additional file change events
Buck UI: https://www.internalfb.com/buck2/414ead46-9ede-4192-8e1a-5d3c52bdb9cc
Test UI: https://www.internalfb.com/intern/testinfra/testrun/6473924710342830
Network: Up: 0B Down: 0B (reSessionID-159794b9-9d61-477e-8e63-9bdeaa537dca)
Analyzing targets. Remaining 0/214
Executing actions. Remaining 0/6933 0.1s exec time total
Command: test. Finished 1 local
Time elapsed: 18.5s
Tests finished: Pass 15. Fail 0. Fatal 0. Skip 0. Build failure 0
```
### Test Run
Updated the config:
```
activation_memory_budget_solver: DYNAMIC_MEMORY_BUDGET_DP
```
Confirming proper execution via: [aps-fb_fm_v4_768_01_dynamic-2a792ba8af](https://www.internalfb.com/mlhub/pipelines/runs/mast/aps-fb_fm_v4_768_01_dynamic-2a792ba8af?job_attempt=0&version=0&env=PRODUCTION)
| true
|
2,748,997,179
|
Update release matrix for 2.6
|
kit1980
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
| null | true
|
2,748,995,632
|
[CUTLASS] fix addmm
|
ColinPeppler
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Previously we would get a CUDA IMA because we passed Bias in for X, so we need to re-order the inputs.
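For reference, a small sketch of the `torch.addmm` argument order the kernel has to match (the bias-like `input` comes first, before the two matrices):
```python
import torch

bias = torch.randn(16)   # broadcast against the (8, 16) matmul result
x = torch.randn(8, 32)
w = torch.randn(32, 16)

# torch.addmm(input, mat1, mat2) == input + mat1 @ mat2 (up to beta/alpha scaling)
out = torch.addmm(bias, x, w)
```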
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143537
* #143528
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,748,977,958
|
FSDP learning hangs when the program tries to save the model
|
phos-phophy
|
open
|
[
"oncall: distributed"
] | 3
|
NONE
|
### 🐛 Describe the bug
## TL;DR
I have a strange intermittent error which I can fix on my own (just by adding one line), but I don't know how to fix it properly in general.
## What's the problem?
Recently I tried to fine-tune some LLMs using [Accelerate](https://github.com/huggingface/accelerate) from HuggingFace. I used FSDP distributed learning + a LoRA adapter to fine-tune 2 models from the Qwen series: [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) and [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). While there are no problems with the latter model, I get a strange intermittent error when trying to save the former after training (it may not happen right away, but it will happen for sure).
```bash
/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:690: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .
warnings.warn(
[rank0]:[E1219 03:01:55.989449808 ProcessGroupNCCL.cpp:616] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=600, OpType=_ALLGATHER_BASE, NumelIn=58309504, NumelOut=233238016, Timeout(ms)=1800000) ran for 1800068 milliseconds before timing out.
[rank0]:[E1219 03:01:55.990175767 ProcessGroupNCCL.cpp:1785] [PG ID 0 PG GUID 0(default_pg) Rank 0] Exception (either an error or timeout) detected by watchdog at work: 600, last enqueued NCCL work: 600, last completed NCCL work: 599.
[rank0]:[E1219 03:01:56.095109308 ProcessGroupNCCL.cpp:1834] [PG ID 0 PG GUID 0(default_pg) Rank 0] Timeout at NCCL work: 600, last enqueued NCCL work: 600, last completed NCCL work: 599.
[rank0]:[E1219 03:01:56.095141631 ProcessGroupNCCL.cpp:630] [Rank 0] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank0]:[E1219 03:01:56.095151475 ProcessGroupNCCL.cpp:636] [Rank 0] To avoid data inconsistency, we are taking the entire process down.
[rank0]:[E1219 03:01:56.096419462 ProcessGroupNCCL.cpp:1595] [PG ID 0 PG GUID 0(default_pg) Rank 0] Process group watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=600, OpType=_ALLGATHER_BASE, NumelIn=58309504, NumelOut=233238016, Timeout(ms)=1800000) ran for 1800068 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fc856081446 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7fc857394772 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7fc85739bbb3 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7fc85739d61d in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7fc89fd2a5c0 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x8609 (0x7fc8a24df609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #6: clone + 0x43 (0x7fc8a2619133 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 0 PG GUID 0(default_pg) Rank 0] Process group watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=600, OpType=_ALLGATHER_BASE, NumelIn=58309504, NumelOut=233238016, Timeout(ms)=1800000) ran for 1800068 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fc856081446 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7fc857394772 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7fc85739bbb3 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7fc85739d61d in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7fc89fd2a5c0 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x8609 (0x7fc8a24df609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #6: clone + 0x43 (0x7fc8a2619133 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fc856081446 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7fc85700a71b in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7fc89fd2a5c0 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x8609 (0x7fc8a24df609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #4: clone + 0x43 (0x7fc8a2619133 in /lib/x86_64-linux-gnu/libc.so.6)
E1219 03:01:57.498000 786687 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: -6) local_rank: 0 (pid: 786758) of binary: /mnt/data/a.kudisov/transformers/.venv/bin/python
Traceback (most recent call last):
File "/mnt/data/a.kudisov/transformers/.venv/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/accelerate/commands/launch.py", line 1155, in launch_command
multi_gpu_launcher(args)
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/accelerate/commands/launch.py", line 793, in multi_gpu_launcher
distrib_run.run(args)
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
train.py FAILED
-------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-12-19_03:01:57
host : ...
rank : 0 (local_rank: 0)
exitcode : -6 (pid: 786758)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 786758
=======================================================
```
This error occurs when the program tries to save the model; it hangs while collecting model.state_dict().
I did a little investigation and found out that the main process (it is distributed training with 4 processes) successfully collects all of the model's layers on CPU except the last one. When it starts processing the last layer, the whole process hangs and crashes on timeout.
If I change the model from Qwen2.5-7B-Instruct to Qwen2.5-1.5B-Instruct (from a big one to a small one), this error disappears (there is still one more [problem](https://github.com/huggingface/transformers/pull/35234) that will not allow you to save the model after training, but it's related to transformers from HuggingFace, not pytorch).
I believe this error has something to do with communication between processes.
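As a side note, a minimal sketch of a debugging aid (hypothetical and not part of the repro below; in this setup accelerate creates the process group itself): shrinking the NCCL timeout makes the watchdog fire much sooner than the default 10 minutes when one rank stops participating, which helps localize where the ranks diverge.
```python
# Hypothetical sketch: run under torchrun so RANK/WORLD_SIZE/MASTER_ADDR are set.
# Not part of the repro; accelerate normally calls init_process_group itself.
import datetime

import torch.distributed as dist

dist.init_process_group(backend="nccl", timeout=datetime.timedelta(minutes=2))
```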
## How to reproduce?
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch --config_file=config.yaml --main_process_port=12355 train.py --output_dir=./save
```
Accelerate config:
```yaml
# accelerate.yaml
compute_environment: LOCAL_MACHINE
debug: true
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: NO_PREFETCH
fsdp_forward_prefetch: false
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Training script:
```python
# train.py
import argparse
from functools import partial
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
def get_data(tokenizer):
data = [{
'user_message': "Hi, how are you?",
'model_message': "I'm good, thanks. How about you?"
}] * 20
data = Dataset.from_list(data)
data = data.train_test_split(train_size=0.7, shuffle=True, seed=42)
tmp = data['test'].train_test_split(test_size=0.6, shuffle=True, seed=143)
data['validation'] = tmp['train']
data['test'] = tmp['test']
def tokenize(x):
messages = [
{'role': 'user', "content": x['user_message']},
{'role': 'assistant', "content": x['model_message']},
]
text = tokenizer.decode(tokenizer.apply_chat_template(messages))
result = tokenizer(text, return_tensors='pt')
sep = '<|im_start|>assistant\n'
input_text = text.split(sep)[0] + sep
input_len = len(tokenizer(input_text)['input_ids'])
result['labels'] = result['input_ids'].clone().detach()
result['labels'][:, :input_len] = -100
return {k: v.tolist()[0] for k, v in result.items()}
tokenized_datasets = data.map(
tokenize,
remove_columns=['user_message', 'model_message'],
)
tokenized_datasets.set_format('torch')
return tokenized_datasets
def collate_fn(data, pad_token_id):
input_ids, labels = tuple([x[key] for x in data] for key in ('input_ids', 'labels'))
input_ids = torch.nn.utils.rnn.pad_sequence(input_ids, batch_first=True, padding_value=pad_token_id)
labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=-100)
return {
'input_ids': input_ids,
'labels': labels,
'attention_mask': input_ids.ne(pad_token_id) * 1
}
def print_trainable_parameters(model):
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(f'trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}')
def training_function(args):
model_name = 'Qwen/Qwen2.5-7B-Instruct'
training_args = TrainingArguments(
output_dir=args.output_dir,
gradient_checkpointing=True,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=1,
save_strategy='no',
seed=42,
data_seed=42,
optim='adamw_8bit'
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
data = get_data(tokenizer)
model = AutoModelForCausalLM.from_pretrained(
model_name,
return_dict=True,
)
model.add_adapter(LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.1,
target_modules=['q_proj', 'k_proj']
))
trainer = Trainer(
model=model,
args=training_args,
train_dataset=data['train'],
eval_dataset=data['validation'],
data_collator=partial(collate_fn, pad_token_id=tokenizer.pad_token_id),
)
if trainer.accelerator.is_main_process:
print_trainable_parameters(model)
trainer.train()
if trainer.is_fsdp_enabled:
trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
trainer.save_model()
def main():
parser = argparse.ArgumentParser(description='Main training script.')
parser.add_argument(
'--output_dir',
type=str,
default='.',
help='Optional save directory where all checkpoint folders will be stored. Default is the current working directory.'
)
args = parser.parse_args()
training_function(args)
if __name__ == '__main__':
main()
```
Environment:
```bash
accelerate==1.1.1
torch==2.5.1+cu124
pandas==2.2.3
peft==0.13.2
datasets==3.1.0
transformers==4.46.3
tqdm==4.67.1
```
## How to fix?
I found that just adding a `dist.barrier()` in the `_pre_state_dict_hook` function (from torch/distributed/fsdp/_state_dict_utils.py) lets me work around the problem, but I don't have enough expertise to be sure that this is the correct fix.
```python
# torch/distributed/fsdp/_state_dict_utils.py
@no_type_check
@torch.no_grad()
def _pre_state_dict_hook(
module: nn.Module,
*args,
**kwargs,
) -> None:
"""
This is called before the core state dict saving logic of ``module``.
``fsdp_state._state_dict_type`` is used to decide what postprocessing will
be done.
"""
fsdp_state = _get_module_fsdp_state_if_fully_sharded_module(module)
if fsdp_state.sharding_strategy == ShardingStrategy.NO_SHARD:
context = _replace_with_full_state_dict_type(fsdp_state)
warnings.warn(
"When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict will"
"be returned."
)
else:
_set_use_dtensor(fsdp_state)
context = contextlib.nullcontext()
with context:
_pre_state_dict_hook_fn = {
StateDictType.FULL_STATE_DICT: _full_pre_state_dict_hook,
StateDictType.LOCAL_STATE_DICT: _local_pre_state_dict_hook,
StateDictType.SHARDED_STATE_DICT: _sharded_pre_state_dict_hook,
}
##############################################
dist.barrier() # I add this
##############################################
_pre_state_dict_hook_fn[fsdp_state._state_dict_type](
fsdp_state,
module,
*args,
**kwargs,
)
```
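For context, a minimal sketch of why a single diverging rank produces exactly this hang (the function name here is hypothetical, and this is independent of the Trainer/accelerate stack): collecting a FULL_STATE_DICT is a collective operation, so every rank must reach the `state_dict()` call; if one rank never gets there, the others block in the all-gather until the NCCL watchdog timeout fires.
```python
# Sketch only: all ranks must call state_dict(); only rank 0 writes to disk.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import (
    FullStateDictConfig,
    FullyShardedDataParallel as FSDP,
    StateDictType,
)


def save_full_state_dict(fsdp_model: FSDP, path: str) -> None:
    cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
    with FSDP.state_dict_type(fsdp_model, StateDictType.FULL_STATE_DICT, cfg):
        state = fsdp_model.state_dict()  # collective: every rank participates
    if dist.get_rank() == 0:
        torch.save(state, path)
```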
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-165-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 2527.355
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 2.6 MiB
L1i cache: 1.8 MiB
L2 cache: 70 MiB
L3 cache: 84 MiB
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn
| true
|
2,748,970,915
|
test/dynamo/test_utils: logging - Stop testing for impossible things.
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143317
* #143307
* __->__ #143535
We don't support assigning to objects or numeric constants at the top level in
config modules, so there is no need to test for them.
(Testing for these specifically breaks a later sorting refactoring, since it requires `<`
to be implemented.)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,748,966,688
|
[FlexAttention] fix various block-mask edge cases
|
drisspg
|
closed
|
[
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143534
* #143525
Today, when calling `_adjust`, we adjust both the partial blocks and the full_kv_blocks, if present. The problem is that if you adjust to a new num_cols and it lands inside an existing full block, that full block becomes a partial block, but since we don't have any partial block indices for it we aren't able to update correctly:
Example:
```Shell
-------------------------------Correct block mask-------------------------------
BlockMask(shape=(1, 1, 101, 129), sparsity=-100.00%,
(0, 0)
████
)
kv_num_blocks tensor([[[2]]], device='cuda:0', dtype=torch.int32)
full_kv_num_blocks tensor([[[0]]], device='cuda:0', dtype=torch.int32)
block_indices tensor([[[[0, 1]]]], device='cuda:0', dtype=torch.int32)
full block_indices tensor([[[[0, 1]]]], device='cuda:0', dtype=torch.int32)
----------------------------------Starts FUll-----------------------------------
BlockMask(shape=(1, 1, 256, 257), sparsity=-24.51%,
(0, 0)
██████
████
)
kv_num_blocks tensor([[[2, 2]]], device='cuda:0', dtype=torch.int32)
full_kv_num_blocks tensor([[[1, 0]]], device='cuda:0', dtype=torch.int32)
block_indices tensor([[[[0, 2, 1],
[1, 2, 0]]]], device='cuda:0', dtype=torch.int32)
full block_indices tensor([[[[1, 0, 2],
[0, 1, 2]]]], device='cuda:0', dtype=torch.int32)
----------------------------Adjusted to final shape-----------------------------
BlockMask(shape=(1, 1, 101, 129), sparsity=-100.00%,
(0, 0)
████
)
kv_num_blocks tensor([[[1]]], device='cuda:0', dtype=torch.int32)
full_kv_num_blocks tensor([[[1]]], device='cuda:0', dtype=torch.int32)
block_indices tensor([[[[0, 2, 1]]]], device='cuda:0', dtype=torch.int32)
full block_indices tensor([[[[1, 0, 2]]]], device='cuda:0', dtype=torch.int32)
```
```Python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention
def mask_mod(b, h, q, kv):
return q < kv
bm = create_block_mask(mask_mod, None, None, 101, 101)
print("Correct block mask".center(80, "-"))
print(bm)
print(f"kv_num_blocks {bm.kv_num_blocks}")
print(f"full_kv_num_blocks {bm.full_kv_num_blocks}")
print(f"block_indices {bm.kv_indices}")
print(f"full block_indices {bm.full_kv_indices}")
print("Starts FUll".center(80, "-"))
bm = create_block_mask(mask_mod, None, None, 256, 257)
print(bm)
print(f"kv_num_blocks {bm.kv_num_blocks}")
print(f"full_kv_num_blocks {bm.full_kv_num_blocks}")
print(f"block_indices {bm.kv_indices}")
print(f"full block_indices {bm.full_kv_indices}")
print("Adjusted to final shape".center(80, "-"))
bm = bm._adjust(101, 129)
print(bm)
print(f"kv_num_blocks {bm.kv_num_blocks}")
print(f"full_kv_num_blocks {bm.full_kv_num_blocks}")
print(f"block_indices {bm.kv_indices}")
print(f"full block_indices {bm.full_kv_indices}")
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng
| true
|
2,748,940,847
|
[TEST ONLY] Repro funcol all_to_all_single called from custom autograd function
|
yf225
|
closed
|
[
"oncall: distributed",
"Stale",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
`TORCH_LOGS="+dynamo" pytest -rA test/distributed/test_inductor_collectives.py::TestCollectivesMultiProc::test_all_to_all_single_custom_autograd_function`
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143533
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,748,915,183
|
[DO NOT MERGE] debugging NoneAsConstantBuffer
|
mlazos
|
closed
|
[
"ciflow/trunk",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,748,910,174
|
[Hierarchical Compile] Update NoneAsConstantBuffer to support graph d…
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Fixes issues I hit while running graph deduplication with torchtune.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,748,814,366
|
Skip test_conv2d_linear_add_broadcast_shapes_cpu on fbcode
|
huydhn
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"test-config/default",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: The test is added by D67376995 and it is failing on fbcode
Test Plan: `buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:mkldnn_pattern_matcher_cpu -- --exact 'caffe2/test/inductor:mkldnn_pattern_matcher_cpu - test_conv2d_linear_add_broadcast_shapes_cpu (caffe2.test.inductor.test_mkldnn_pattern_matcher.TestPatternMatcher)'`
Differential Revision: D67413687
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,748,801,491
|
feature_use: Remove JK from naming for feature use.
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143529
See the discussion in https://github.com/pytorch/pytorch/pull/142819, but
TL;DR: since we're logging feature use and not direct JK reads, it's less
confusing for the naming to follow the logging.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,748,778,345
|
[CUTLASS] fix bugs: extra data_ptr() call, wrong size symbol name, bias symbol not added
|
ColinPeppler
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
A few small things in this PR:
- fixed a bug where `workspace.data_ptr().data_ptr()` showed up
- for SM80 CUTLASS kernels, the symbol size for W.size(1) was never created
- for addmm kernels, the ldc bias symbol never showed up
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143537
* __->__ #143528
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,748,759,861
|
BF16 act, int8 WoQ GEMM with FP16 compute & accum from IPEX for m dim <= 4
|
sanchitintel
|
closed
|
[
"module: cpu",
"open source",
"Stale",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Intel Extension for PyTorch computes the GEMM for the int8 WoQ case with BF16 activation when the `M` dimension is <= 4 by converting both the activation & weights to FP16 and fusing the application of the scale. FP16 FMA instructions are then used in the GEMM micro-kernel, and accumulation is also done in FP16.
The current approach in PyTorch uses FP32 compute & FP32 accum for this case. FP16 compute & accum is faster but has poorer numerical accuracy.
The motivation is to speed up next-token generation of LLMs.
TODO: Explore why micro-kernels based on the BF16 dot-product instruction `_mm512_dpbf16_ps` that use FP32 accum don't work as well.
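A rough numerics-only sketch of the idea (not the IPEX micro-kernel; shapes and names are made up for illustration): the int8 weight is upconverted to FP16 with the per-channel scale folded in, and the matmul then computes and accumulates in FP16.
```python
import torch


def woq_int8_gemm_fp16(x_bf16, w_int8, scale):
    # x_bf16: [M, K] activation, w_int8: [N, K] weight, scale: [N] per-channel
    x = x_bf16.to(torch.float16)
    w = w_int8.to(torch.float16) * scale.to(torch.float16)[:, None]
    return x @ w.t()  # FP16 compute & FP16 accumulation


x = torch.randn(4, 64, dtype=torch.bfloat16)  # M <= 4, i.e. decode-time shapes
w = torch.randint(-128, 127, (32, 64), dtype=torch.int8)
s = torch.rand(32)
y = woq_int8_gemm_fp16(x, w, s)  # [4, 32], dtype float16
```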
cc @jgong5 @mingfeima @XiaobingSuper @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,748,749,793
|
Unbacked SymInt fixes for subclasses + data-dependent slice() bounds (non-dynamic)
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143311
* #142063
* #142062
* __->__ #143526
Lifted non-controversial (non-dynamic) fixes from #142062. See description there for context.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,748,721,674
|
[FlexAttention] fix block-mask slicing with new seq-length arg
|
drisspg
|
closed
|
[
"Stale",
"release notes: nn",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143534
* __->__ #143525
Fixes: https://github.com/pytorch/pytorch/issues/143260
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng
| true
|
2,748,711,741
|
trace.save_real_tensors segfaults on resnet
|
exclamaforte
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Reproduction:
```
import torch
import torchvision.models as models
device = torch.device("cuda")
input_tensor = torch.randn(1, 3, 224, 224, dtype=torch.float32).to(device)
comp = torch.compile(options={"trace.enabled": True, "trace.save_real_tensors": True})(models.resnet50(pretrained=True).to(device))
print(comp(input_tensor))
```
### Error logs
```
/home/gabeferns/.conda/envs/fuzzer/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/home/gabeferns/.conda/envs/fuzzer/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet50_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
/home/gabeferns/pt-envs/fuzzer/torch/_inductor/compile_fx.py:194: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
Aborted (core dumped)
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+git40fcd30
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.34
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.5.1
/usr/lib64/libcudnn_adv.so.9.5.1
/usr/lib64/libcudnn_cnn.so.9.5.1
/usr/lib64/libcudnn_engines_precompiled.so.9.5.1
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib64/libcudnn_graph.so.9.5.1
/usr/lib64/libcudnn_heuristic.so.9.5.1
/usr/lib64/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 46
On-line CPU(s) list: 0-45
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 46
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.9 MiB (46 instances)
L1i cache: 2.9 MiB (46 instances)
L2 cache: 23 MiB (46 instances)
L3 cache: 736 MiB (46 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-45
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0a0+git40fcd30
[pip3] torchvision==0.20.0.dev20241211+cu126
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0a0+git40fcd30 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241211+cu126 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,748,696,442
|
[Dynamo] topologically sort duplicated graph regions
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 8
|
CONTRIBUTOR
|
Ensure regions are topologically sorted
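A generic sketch of the ordering property this enforces (standard-library topological sort with made-up region names, not the actual Dynamo code): a region is only emitted after every region it depends on.
```python
from graphlib import TopologicalSorter

# region -> set of regions it depends on
deps = {"region_c": {"region_a", "region_b"}, "region_b": {"region_a"}}
print(list(TopologicalSorter(deps).static_order()))
# ['region_a', 'region_b', 'region_c']
```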
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,748,635,466
|
[Dynamo] Flatten slices during graph deduplication
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
I encountered this issue while debugging torchtune: overall, we need to make sure not to miss nodes that are slice arguments.
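A minimal sketch of the flattening concern (plain Python, not the actual deduplication pass): a `slice` carries its own start/stop/step, so argument traversal has to look inside it instead of treating it as a leaf.
```python
def flatten_arg(arg):
    if isinstance(arg, slice):
        return [arg.start, arg.stop, arg.step]
    return [arg]


print(flatten_arg(slice(0, 8, 2)))  # [0, 8, 2]
print(flatten_arg(3))               # [3]
```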
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,748,626,186
|
[MPS] Add regression test for sliced matmul
|
hvaara
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
This issue seems to have been fixed in macOS already. This PR only adds a regression test. I'm expecting failure on test runners with macOS 13 and 14.
Fixes #104832
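A rough sketch of the kind of check such a regression test performs (the actual test lives in the PR diff, not in this description; tolerances here are illustrative): matmul on a sliced, non-contiguous MPS tensor should match the CPU result.
```python
import torch

if torch.backends.mps.is_available():
    a = torch.randn(8, 8)
    b = torch.randn(8, 8)
    ref = a[:, :4] @ b[:4, :]
    out = (a.to("mps")[:, :4] @ b.to("mps")[:4, :]).cpu()
    torch.testing.assert_close(ref, out, rtol=1e-4, atol=1e-4)
```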
| true
|
2,748,620,176
|
[MPS] Use metal shaders for all view ops
|
pytorchbot
|
closed
|
[
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Before this PR, Metal shaders were used to scatter/gather 1-5 dimensional tensors.
This PR introduces generalized ones that can be used for any dimensionality and, as a result, gets rid of 700+ lines of complex and untested code that might not even work as expected.
The generalized gather shader looks as follows:
```metal
kernel void gather_kernel_n(uint linear_index [[thread_position_in_grid]],
constant void * src_ [[buffer(0)]],
device void * dst_ [[buffer(1)]],
constant uint32_t * size [[buffer(2)]],
constant uint32_t * stride [[buffer(3)]],
constant uint32_t & numel [[buffer(4)]],
constant int32_t & ndim [[buffer(5)]]) {{
if (linear_index >= numel) return;
constant {0} * src = (constant {0} *)src_;
device {1} * dst = (device {1} *)dst_;
uint64_t src_offs = 0;
auto src_idx = linear_index;
for(int dim = ndim - 1; dim >= 0; --dim) {{
src_offs += stride[dim] * (src_idx % size[dim]);
src_idx /= size[dim];
}}
dst[linear_index] = cast<{1}>(src[src_offs]);
}}
```
This shader, according to the following benchmark
```python
from timeit import default_timer
import torch
import torch.utils.cpp_extension
from torch.utils.benchmark import Measurement, Timer
t = Timer(
stmt=f"y.copy_(x);torch.mps.synchronize()",
setup=f"x=torch.rand(4, 5, 16, 64, 33, 24, dtype=torch.float32, device='mps')[:,:,:,:24,:24,];y=torch.empty(x.shape, device=x.device, dtype=x.dtype)",
language="python", timer=default_timer
)
print(t.blocked_autorange())
```
is almost twice as fast as the previous implementation (i.e. on a MacBook M2 Pro it reports 2.9ms for the MPS version vs 1.5ms for the shader one).
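For intuition, the same index arithmetic in plain Python (a sketch, not the production path):
```python
def gather_offsets(sizes, strides, numel):
    offsets = []
    for linear_index in range(numel):
        off, idx = 0, linear_index
        for dim in reversed(range(len(sizes))):
            off += strides[dim] * (idx % sizes[dim])
            idx //= sizes[dim]
        offsets.append(off)
    return offsets


# a 2x3 view carved out of a row-major buffer whose rows are 5 elements apart
print(gather_offsets([2, 3], [5, 1], 6))  # [0, 1, 2, 5, 6, 7]
```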
On macOS Sequoia [`gatherWithUpdatesTensor: indicesTensor:...`](https://developer.apple.com/documentation/metalperformanceshadersgraph/mpsgraph/gather(withupdatestensor:indicestensor:axis:batchdimensions:name:)?language=objc) crashes if invoked with a complex data type, as one can see by running the code below:
```swift
import Metal
import MetalPerformanceShadersGraph
func gatherComplexMPS(device: MTLDevice,
inp_buf: MTLBuffer, idx_buf: MTLBuffer,
out_buf: MTLBuffer,
inp_elem: Int, upd_elem: Int) {
let graph = MPSGraph()
let inputPlaceholder = graph.placeholder(shape: [inp_elem as NSNumber], dataType: .complexFloat32, name: nil)
let indicesPlaceholder = graph.placeholder(shape: [upd_elem as NSNumber], dataType: .int64, name: nil)
let outNode = graph.gather(withUpdatesTensor: inputPlaceholder, indicesTensor: indicesPlaceholder, axis: 0, batchDimensions: 0, name: nil)
let mpsInputBuffer = MPSGraphTensorData(inp_buf, shape: [inp_elem as NSNumber], dataType: .complexFloat32)
let mpsIndicesBuffer = MPSGraphTensorData(idx_buf, shape: [upd_elem as NSNumber], dataType: .int64)
let mpsOutputBuffer = MPSGraphTensorData(out_buf, shape: [inp_elem as NSNumber], dataType: .complexFloat32)
guard let queue = device.makeCommandQueue() else { fatalError("Can't make queue") }
graph.run(with: queue, feeds: [inputPlaceholder: mpsInputBuffer,
indicesPlaceholder: mpsIndicesBuffer ],
targetOperations: nil, resultsDictionary: [outNode: mpsOutputBuffer])
}
func makeBufferWithValues<T>(device: MTLDevice, values: [T]) -> MTLBuffer {
guard let buf = device.makeBuffer(length: values.count * MemoryLayout<T>.size, options: [.storageModeShared]) else { fatalError("Can't alloc") }
let buf_data = buf.contents().assumingMemoryBound(to: T.self)
for i in 0..<values.count {
buf_data[i] = values[i]
}
return buf
}
guard let device = MTLCopyAllDevices().first else { fatalError("Not Metal device found") }
print("Using device \(device.name)")
let inp_buf = makeBufferWithValues(device: device, values: [1.0, 2.0 , 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
let idx_buf = makeBufferWithValues(device: device, values: [0, 1, 2, 3])
guard let out_buf = device.makeBuffer(length:8 * MemoryLayout<Float>.size, options: [.storageModeShared]) else { fatalError("Can't alloc") }
gatherComplexMPS(device: device, inp_buf: inp_buf, idx_buf: idx_buf, out_buf: out_buf, inp_elem: 4, upd_elem: 4)
```
Fixes https://github.com/pytorch/pytorch/issues/143140
| true
|
2,748,619,531
|
[user triton] Raise an exception when encountering nested @triton.autotune decorators or @triton.heuristics
|
SamGinzburg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143519
We support running a single Autotuner for each Triton kernel. Currently,
if there are multiple autotuning decorators, the subsequent ones will be
silently ignored.
Instead, we should raise an error here to avoid silent incorrectness.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,748,607,304
|
[DRAFT] Implement getattr access for subclasses in pre-dispatch IR
|
tugsbayasgalan
|
closed
|
[
"fx",
"ciflow/inductor",
"release notes: export"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143518
* #141941
This is a draft PR that prototypes how to capture attribute access in pre-dispatch IR. The motivating use case is https://github.com/pytorch/ao/blob/039cef4ad546716aa04cd54c461feb173f7fe403/tutorials/developer_api_guide/export_to_executorch.py#L54, where TorchAO overrides Embedding.weight with a tensor subclass and then does attribute access inside it. We have to solve this problem in both strict and non-strict mode because even when dynamo translates `subclass.inner_tensor` into an fx Graph, the underlying tracer that converts torch IR to aten IR will need to handle `subclass.inner_tensor` as well.
The basic implementation idea here is: "We will override `__getattr__` of tensors so that we can inject a torch_function handler to intercept it in make_fx". But the complications are:
1. We don't want to add an override to `__getattr__` in Tensor because it will significantly slow down eager performance. Some synthetic benchmarking we did showed around a 15% slowdown (the benchmark is a bit skewed because the example we tried had a lot of attr accesses).
2. We can only intercept `__getattr__` for input tensors (user inputs + params/buffers) because intercepting everything would require global patching.
So we decided to monkey patch inner tensors as properties in export. The main difficulty then is to correctly clean up the monkey-patched attr accesses. Roughly we are doing something like this:
```
class Foo():
bar = 2
foo = Foo()
print(foo.bar)
def patch(obj):
obj._real_foo = obj.bar
def my_getter(self):
print("handle things here ")
return self._real_foo
def my_setter(self, new_val):
print("Handle setter here")
self._real_foo = new_val
type(obj).bar = property(my_getter, my_setter)
patch(foo)
print(foo.bar)
foo.bar = 3
print(foo.bar)
```
Other approaches we considered but didn't pursue:
1. Ask subclass authors to explicitly mark inner tensors as properties, which sounds a bit lame.
2. If the user is interested in the inference IR only, don't materialize pre-dispatch IR, which complicates the story on the export side.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,748,599,854
|
[codemod] Fix a few unused-variable issues in pytorch
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"topic: improvements",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary:
LLVM-15 has a warning `-Wunused-variable` which we treat as an error because it's so often diagnostic of a code issue. Unused variables can compromise readability or, worse, performance.
This diff either (a) removes an unused variable and, possibly, its associated code or (b) qualifies the variable with `[[maybe_unused]]`.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
Reviewed By: palmje
| true
|
2,748,596,852
|
[BE] Get rid of `malfet/checkout@silent-checkout`
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Instead use `actions/checkout@v4` with `show-progress: false`. It's more verbose than the quiet option, but our logs are long anyway...
Partially addresses https://github.com/pytorch/pytorch/issues/143079
| true
|
2,748,570,407
|
[cutlass-3] Update third-party/cutlass-3 from 3.4 to 3.5.1
|
drisspg
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"release notes: sparse",
"module: inductor",
"ciflow/inductor"
] | 12
|
CONTRIBUTOR
|
# Summary:
This also makes updates to different repositories throughout FB code to roll any updates needed for this new release.
I was not able to get AsyncMM.cu to build (still trying); Yfiu suggested that I just skip it for now.
Test Plan:
Have run various build commands to try and expose errors
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|