| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (2 classes) | labels (list, 0–24 items) | comments (int64, 0–206) | author_association (4 classes) | body (string, 7–62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,745,818,362
|
[Inductor test failure] torch:inductor/test_select_algorithm TestSelectAlgorithm.test_convolution1 with cuda 12.6.3
|
qlzh727
|
open
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"module: inductor"
] | 22
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This is Qianli from Google. Our GPU team is testing the CUDA 12.6.3 release and noticed an inductor test failure in our internal infra. I don't have a dev box set up to fully mirror this issue in an OSS env, but please see the error log and check whether anything in the recent CUDA changes could cause this error.
The CUDA version we currently use internally is 12.0, and from https://pytorch.org/ it seems that PyTorch supports 12.4, so the behavior change potentially happens between 12.4 and 12.6.3.
In the passing run, the info log looks like the one below; somehow the correctness check for triton_convolution2d_48 failed.
```
[test/inductor/test_select_algorithm](http://test/inductor/test_select_algorithm).py::TestSelectAlgorithm::test_convolution1 AUTOTUNE convolution(2x33x34x41, 34x33x3x3)
convolution 0.0436 ms 100.0%
triton_convolution2d_46 0.0592 ms 73.7% ALLOW_TF32=False, BLOCK_K=16, BLOCK_M=64, BLOCK_N=64, GROUPS=1, KERNEL_H=3, KERNEL_W=3, PADDING_H=4, PADDING_W=5, STRIDE_H=2, STRIDE_W=3, UNROLL=False, num_stages=2, num_warps=4
triton_convolution2d_50 0.0749 ms 58.2% ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=64, GROUPS=1, KERNEL_H=3, KERNEL_W=3, PADDING_H=4, PADDING_W=5, STRIDE_H=2, STRIDE_W=3, UNROLL=False, num_stages=2, num_warps=4
triton_convolution2d_51 0.1000 ms 43.6% ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=64, BLOCK_N=64, GROUPS=1, KERNEL_H=3, KERNEL_W=3, PADDING_H=4, PADDING_W=5, STRIDE_H=2, STRIDE_W=3, UNROLL=False, num_stages=2, num_warps=8
triton_convolution2d_49 0.1172 ms 37.2% ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=128, BLOCK_N=64, GROUPS=1, KERNEL_H=3, KERNEL_W=3, PADDING_H=4, PADDING_W=5, STRIDE_H=2, STRIDE_W=3, UNROLL=False, num_stages=2, num_warps=8
triton_convolution2d_52 0.1977 ms 22.1% ALLOW_TF32=False, BLOCK_K=32, BLOCK_M=256, BLOCK_N=64, GROUPS=1, KERNEL_H=3, KERNEL_W=3, PADDING_H=4, PADDING_W=5, STRIDE_H=2, STRIDE_W=3, UNROLL=False, num_stages=2, num_warps=8
triton_convolution2d_47 0.2401 ms 18.2% ALLOW_TF32=False, BLOCK_K=16, BLOCK_M=256, BLOCK_N=64, GROUPS=1, KERNEL_H=3, KERNEL_W=3, PADDING_H=4, PADDING_W=5, STRIDE_H=2, STRIDE_W=3, UNROLL=False, num_stages=2, num_warps=4
triton_convolution2d_48 0.2537 ms 17.2% ALLOW_TF32=False, BLOCK_K=16, BLOCK_M=1024, BLOCK_N=16, GROUPS=1, KERNEL_H=3, KERNEL_W=3, PADDING_H=4, PADDING_W=5, STRIDE_H=2, STRIDE_W=3, UNROLL=False, num_stages=1, num_warps=8
frames [('total', 1), ('ok', 1)]
stats [('calls_captured', 2), ('unique_graphs', 1)]
aot_autograd [('total', 1), ('ok', 1)]
```
### Error logs
```
Traceback (most recent call last):
File "torch/test/inductor/test_select_algorithm.py", line 264, in test_convolution1
foo(
File "torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 1404, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 1188, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 549, in __call__
return _compile(
^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 985, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 712, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 747, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 233, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/convert_frame.py", line 664, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2841, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 1032, in run
while self.step():
^^^^^^^^^^^
File "torch/_dynamo/symbolic_convert.py", line 944, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 3021, in RETURN_VALUE
self._return(inst)
File "torch/_dynamo/symbolic_convert.py", line 3006, in _return
self.output.compile_subgraph(
File "torch/_dynamo/output_graph.py", line 1077, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "torch/_dynamo/output_graph.py", line 1349, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/output_graph.py", line 1399, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/output_graph.py", line 1448, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "torch/_dynamo/output_graph.py", line 1429, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/__init__.py", line 2304, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/compile_fx.py", line 1733, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_functorch/aot_autograd.py", line 1103, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "torch/_functorch/aot_autograd.py", line 1079, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_functorch/aot_autograd.py", line 527, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_functorch/aot_autograd.py", line 778, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 197, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/compile_fx.py", line 1546, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/compile_fx.py", line 1615, in _fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "torch/_inductor/compile_fx.py", line 599, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/compile_fx.py", line 756, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
^^^^^^^^^^^^^^^^^^
File "torch/_inductor/codecache.py", line 1577, in load
compiled_graph = compile_fx_fn(
^^^^^^^^^^^^^^
File "torch/_inductor/compile_fx.py", line 663, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/compile_fx.py", line 974, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/graph.py", line 2019, in compile_to_fn
return self._compile_to_fn()
^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/graph.py", line 2051, in _compile_to_fn
return self.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/graph.py", line 1957, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/graph.py", line 1963, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
^^^^^^^^^^^^^^
File "torch/_inductor/graph.py", line 1894, in codegen
self.scheduler = Scheduler(self.operations)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/scheduler.py", line 1763, in __init__
self._init(nodes)
File "torch/_inductor/scheduler.py", line 1833, in _init
self.nodes = self.fuse_nodes(self.nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/scheduler.py", line 2326, in fuse_nodes
nodes = self.fuse_nodes_once(nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/scheduler.py", line 2622, in fuse_nodes_once
if not self.speedup_by_fusion(node1, node2):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/scheduler.py", line 2519, in speedup_by_fusion
choice_timings = multi_node.choice_timings
^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/ir.py", line 4252, in choice_timings
self._choice_timings = self._choice_timings_fn()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/select_algorithm.py", line 1468, in get_timings
timings = do_autotuning(precompile_fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/select_algorithm.py", line 1433, in do_autotuning
timings = self.lookup(
^^^^^^^^^^^^
File "torch/test/inductor/test_select_algorithm.py", line 26, in skip_cache
return benchmark(choices)
^^^^^^^^^^^^^^^^^^
File "torch/_inductor/select_algorithm.py", line 1418, in autotune
return make_benchmark_fn()(choices)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/_inductor/select_algorithm.py", line 1611, in benchmark_in_current_process
raise AssertionError( # noqa: B904
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: Incorrect result from choice TritonTemplateCaller(/tmp/tmpojnhspqd/lu/cluvzbjsxripzemg5ihr3rxsnjpiin5vvwg3ejwznf7th2yegd42.py, ALLOW_TF32=False, BLOCK_K=16, BLOCK_M=1024, BLOCK_N=16, GROUPS=1, KERNEL_H=3, KERNEL_W=3, PADDING_H=4, PADDING_W=5, STRIDE_H=2, STRIDE_W=3, UNROLL=False, num_stages=1, num_warps=8)
Tensor-likes are not close!
Mismatched elements: 19584 / 23120 (84.7%)
Greatest absolute difference: 132.32015991210938 at index (0, 22, 4, 13) (up to 0.0001 allowed)
Greatest relative difference: inf at index (0, 0, 1, 0) (up to 0.0001 allowed)
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 python test/inductor/test_select_algorithm.py TestSelectAlgorithm.test_convolution1
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
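A minimal sketch (not part of the original report) of checking the same conv numerics outside of autotuning, assuming a CUDA build and using the shapes and tolerance from the failing check:
```python
# Hedged sketch: compare eager vs. compiled conv2d with TF32 disabled, using the
# shapes from the failing test (2x33x34x41 input, 34x33x3x3 weight) and the
# 1e-4 tolerance from the autotune correctness check.
import torch

torch.backends.cudnn.allow_tf32 = False
torch.backends.cuda.matmul.allow_tf32 = False

conv = torch.nn.Conv2d(33, 34, kernel_size=3, stride=(2, 3), padding=(4, 5)).cuda()
x = torch.randn(2, 33, 34, 41, device="cuda")

expected = conv(x)
actual = torch.compile(conv, mode="max-autotune")(x)  # exercises the Triton conv templates
torch.testing.assert_close(actual, expected, atol=1e-4, rtol=1e-4)
```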
### Versions
Torch version:
commit hash f4ce9ac29d524fae51b6d8e300c4cab016fc8f18
closest_version: "ciflow/periodic/0b13bdd877f7b612ab2990e327ab3b40242945bf"
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,745,817,101
|
Support tensor subclass unwrapping (#141941)
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Stale",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Summary:
imported-using-ghimport
Test Plan: Imported from OSS
Reviewed By: bdhirsh
Differential Revision: D66690419
Pulled By: tugsbayasgalan
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,745,766,046
|
Triton bump for 3.2 cherry-picks (device context)
|
bertmaher
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143409
Summary:
* https://github.com/triton-lang/triton/pull/3731
| true
|
2,745,762,469
|
Parallelize epilogue/prologue benchmarking
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145104
* #139102
* #145103
* __->__ #143408
When we attempt prologue or epilogue fusion with a TritonTemplate, we benchmark it at compile time in order to determine profitability. This avoids slowdowns/register spilling, and allows us to pick fusion when a base triton template is slower than cublas but faster when considering an epilogue. However, that fused benchmarking does not do the same async compilation as we do for the base TritonTemplate, which is async compiled during lowering and then later waited on and benchmarked.
This PR extends a similar process to benchmarking fused TritonTemplates in the scheduler. We keep a list of pending fusions that have outstanding async compilations, and we resolve any pending fusions a node is part of before attempting to fuse it with any other node.
Initially, I saw some slowdowns with this because we kick off async compilations of identical fusions in parallel. To address this, I added source code caching at the `async_compile` level (we also already cache benchmark runs, but that would not happen in parallel).
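An illustrative sketch (assumed names, not the actual `torch._inductor` code) of the source-code-keyed caching described above, where identical fused-kernel sources share one in-flight compilation:
```python
# Illustrative sketch only: dedupe async compilations by keying futures on the
# generated source code, so identical fusions kicked off in parallel compile once.
from concurrent.futures import Future, ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=8)
_pending: dict[str, Future] = {}  # source code -> in-flight compilation

def async_compile(source_code: str, compile_fn) -> Future:
    fut = _pending.get(source_code)
    if fut is None:
        fut = _pool.submit(compile_fn, source_code)
        _pending[source_code] = fut
    return fut

def resolve_pending(futures: list[Future]) -> list:
    # Resolve all pending fused-kernel compilations for a node before
    # attempting to fuse it with any other node.
    return [f.result() for f in futures]
```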
Compilation speedups:
<img width="717" alt="image" src="https://github.com/user-attachments/assets/8e8f7d6c-7824-4210-83f9-a2a0f6db5ac9" />
This also should let us be a bit more aggressive with either configs, or benchmarking other fusions which are hard to determine profitability of.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,745,749,847
|
Fix unused variables in test/test_transformers.py
|
rec
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143407
| true
|
2,745,745,033
|
torch.compile - Simplify error when triton package is not found
|
atalman
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Initial title: triton: python 3.13 torch.compile smoke test is failing in CI/CD
I observe the following failure when running smoke_test for Python 3.13 with CUDA 11.8, 12.4, and 12.6: https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/smoke_test/smoke_test.py
Detected during validation testing. Please see: https://github.com/pytorch/test-infra/issues/6077#issue-2745142173
Workflow: https://github.com/pytorch/pytorch/actions/runs/12378525708/job/34555282656?pr=143397#step:15:1434
Full log:
```
++ python3 ./smoke_test/smoke_test.py --package torchonly
torch: 2.6.0+cu118
ATen/Parallel:
at::get_num_threads() : 8
at::get_num_interop_threads() : 16
OpenMP 201511 (a.k.a. OpenMP 4.5)
omp_get_max_threads() : 8
Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 8
Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
std::thread::hardware_concurrency() : 16
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
Testing smoke_test_conv2d
Testing smoke_test_conv2d with cuda
/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py:232: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with torch.cuda.amp.autocast():
Testing smoke_test_conv2d with cuda for torch.float16
Testing smoke_test_conv2d with cuda for torch.float32
Testing smoke_test_conv2d with cuda for torch.float64
Testing smoke_test_linalg on cpu
Testing smoke_test_linalg on cuda
Testing smoke_test_linalg with cuda for torch.float32
Testing smoke_test_linalg with cuda for torch.float64
Testing smoke_test_compile for cuda and torch.float16
Traceback (most recent call last):
File "/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 385, in <module>
main()
~~~~^^
File "/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 379, in main
smoke_test_cuda(
~~~~~~~~~~~~~~~^
options.package, options.runtime_error_check, options.torch_compile_check
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 186, in smoke_test_cuda
smoke_test_compile("cuda" if torch.cuda.is_available() else "cpu")
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 286, in smoke_test_compile
x_pt2 = torch.compile(foo)(x)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/conda-env-12[3750](https://github.com/pytorch/test-infra/actions/runs/12375056109/job/34539025723#step:14:3751)56109/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self,
^^^^^
...<2 lines>...
),
^^
)
^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
tx, list(reversed(stack_values)), root, output_replacements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
e.__traceback__
) from None
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
~~~~~~~~~~~~~
...<6 lines>...
cudagraphs=cudagraphs,
~~~~~~~~~~~~~~~~~~~~~~
)(model_, example_inputs_)
~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
functional_call,
^^^^^^^^^^^^^^^^
...<3 lines>...
shape_env,
^^^^^^^^^^
)
^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
flat_fn, fake_flat_args, aot_config, fake_mode, shape_env
)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
~~~~~~~~~~~^
flat_fn,
^^^^^^^^
...<2 lines>...
fw_metadata=fw_metadata,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 203, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base
return inner_compile(
gm,
...<5 lines>...
boxed_forward_device_index=forward_device,
)
Traceback (most recent call last):
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
File "/home/ec2-user/actions-runner/_work/test-infra/test-infra/test-infra/.github/scripts/run_with_env_secrets.py", line 102, in <module>
main()
File "/home/ec2-user/actions-runner/_work/test-infra/test-infra/test-infra/.github/scripts/run_with_env_secrets.py", line 98, in main
run_cmd_or_die(f"docker exec -t {container_name} /exec")
File "/home/ec2-user/actions-runner/_work/test-infra/test-infra/test-infra/.github/scripts/run_with_env_secrets.py", line 39, in run_cmd_or_die
raise RuntimeError(f"Command {cmd} failed with exit code {exit_code}")
RuntimeError: Command docker exec -t 9918c06ce07754c73f8d05922651f9952bdb247939406ec9e910b63df29bad02 /exec failed with exit code 1
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
gm,
^^^
example_inputs,
^^^^^^^^^^^^^^^
**kwargs,
^^^^^^^^^
)
^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 685, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
gm, example_inputs, inputs_to_check, **graph_kwargs
)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/compile_fx.py", line 1044, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2027, in compile_to_module
return self._compile_to_module()
~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/graph.py", line 2033, in _compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
~~~~~~~~~~~~^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/graph.py", line 1964, in codegen
self.scheduler = Scheduler(self.operations)
~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/scheduler.py", line 1798, in __init__
self._init(nodes)
~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/scheduler.py", line 1816, in _init
self.nodes = [self.create_scheduler_node(n) for n in nodes]
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/scheduler.py", line 1947, in create_scheduler_node
return SchedulerNode(self, node)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/scheduler.py", line 893, in __init__
self._compute_attrs()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/scheduler.py", line 907, in _compute_attrs
group_fn = self.scheduler.get_backend(device).group_fn
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/scheduler.py", line 3441, in get_backend
self.backends[device] = self.create_backend(device)
~~~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_inductor/scheduler.py", line 3432, in create_backend
raise RuntimeError(
"Cannot find a working triton installation. Either the package is not installed or it is too old. More information on installing Triton can be found at https://github.com/openai/triton" # noqa: B950
)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Cannot find a working triton installation. Either the package is not installed or it is too old. More information on installing Triton can be found at https://github.com/openai/triton
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
++ handle_error
Please note: We are currently migrating Linux Wheel builds to Manywheel 2.28
++ echo 'Please note: We are currently migrating Linux Wheel builds to Manywheel 2.28'
++ echo 'If you see error like: ImportError: /lib64/libc.so.6: version GLIBC_2.28 not found'
++ echo 'Please migrate to: https://github.com/pytorch/test-infra/blob/main/.github/workflows/linux_job_v2.yml'
++ echo 'Issue: https://github.com/pytorch/pytorch/issues/123649'
If you see error like: ImportError: /lib64/libc.so.6: version GLIBC_2.28 not found
Please migrate to: https://github.com/pytorch/test-infra/blob/main/.github/workflows/linux_job_v2.yml
Issue: https://github.com/pytorch/pytorch/issues/123649
Error: Process completed with exit code 1.
```
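The underlying failure is inductor's GPU backend requiring a working triton install; a hedged sketch (an assumption, not part of this issue) of guarding `torch.compile` on triton availability:
```python
# Hedged sketch (not from this issue): fall back to eager when the inductor GPU
# backend would fail because no usable triton package is installed.
import importlib.util

import torch

def maybe_compile(fn):
    needs_triton = torch.cuda.is_available()
    if needs_triton and importlib.util.find_spec("triton") is None:
        return fn  # eager fallback instead of a RuntimeError inside inductor
    return torch.compile(fn)
```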
### Versions
2.6.0
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ezyang @gchanan @zou3519 @msaroufim
| true
|
2,745,736,903
|
[compiled autograd] Always proxy autograd.Function nodes; handle AOT backwards
|
zou3519
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd",
"ci-no-td"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144115
* #143417
* __->__ #143405
* #143387
* #143304
* #143296
We will always proxy autograd.Function nodes in compiled autograd's initial graph capture (previously there was an option to proxy vs. trace into the autograd.Function).
We have some requirements for the AOTBackward. Compiled Autograd runs
accumulate grad reordering passes on the AOTBackward graph directly
after the initial graph capture, so we can't just proxy a single node for it.
Instead, we:
- proxy the AOTBackward prologue function into the CA graph
- copy-paste the AOTBackward graph into the CA graph
- trace directly through the epilogue (the traced nodes go into the CA
graph).
Tracing through the epilogue is safe (assuming no Tensor subclasses)
because the only thing the epilogue does is drop some outputs. The
Tensor subclass situation was already broken so this doesn't regress
anything but this PR sets it up to be fixed (in a followup, where we
will proxy "make_subclass" calls into the graph from the epilogue).
Test Plan:
- existing tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @xmfan
| true
|
2,745,728,947
|
[dynamo, 3.13t] raise error if torch.compile is attempted in 3.13t (nogil)
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: python version"
] | 7
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143404
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,745,706,715
|
Add warning to torch.jit.load
|
mikaylagawarecki
|
closed
|
[
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: jit",
"topic: docs"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143403
* #143326
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,745,696,819
|
(MTIA) Move "empty_cache" API
|
chaos5958
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
Summary: This diff moves one of the memory-related APIs to the consolidated location, `mtia/memory.py`.
Test Plan:
```
buck2 test //mtia/host_runtime/torch_mtia/tests:test_torch_mtia_api
```
https://www.internalfb.com/intern/testinfra/testrun/13510798943184259
Reviewed By: nautsimon
Differential Revision: D67148738
| true
|
2,745,662,972
|
fix a few int64_t index computations, fix complex128 scan that had to…
|
ngimel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 3
|
COLLABORATOR
|
…o few threads
per title
cc @eqy
| true
|
2,745,643,754
|
Remove Android folder
|
msaroufim
|
closed
|
[
"ciflow/trunk",
"topic: deprecation",
"topic: not user facing",
"skip-pr-sanity-checks",
"suppress-bc-linter"
] | 10
|
MEMBER
|
This folder is a tutorial that is not packaged in PyTorch; it is an example of how to use the now-deprecated Lite Interpreter on Android.
People should be using ExecuTorch instead, and there is already good documentation for it across our tutorials and the main homepage.
Testing to see what breaks in CI:
1. Removed `add_subdirectory(android/pytorch_android)` from CMakeLists.txt
2. Removed fbjni from `.gitmodules`
A similar PR was sent here https://github.com/pytorch/pytorch/pull/143398
cc @albanD
| true
|
2,745,640,480
|
Fix unused variables in test/torch.py
|
rec
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143399
| true
|
2,745,625,457
|
Remove iOS folder
|
msaroufim
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: deprecation",
"topic: not user facing",
"skip-pr-sanity-checks",
"suppress-bc-linter"
] | 8
|
MEMBER
|
This folder is a tutorial that is not packaged in PyTorch; it is an example of how to use the now-deprecated Lite Interpreter.
People should be using ExecuTorch instead, and there is already good documentation for it across our tutorials and the main homepage.
Testing to see what breaks in CI
cc @albanD
| true
|
2,745,614,545
|
Enable torch.compile check on nightly validations
|
atalman
|
closed
|
[
"ciflow/binaries",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
I am seeing an error with torch.compile on Python 3.13. Please see https://github.com/pytorch/test-infra/issues/6077#issue-2745142173
Error:
```
esting smoke_test_compile for cuda and torch.float16
Traceback (most recent call last):
File "/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 385, in <module>
main()
~~~~^^
File "/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 379, in main
smoke_test_cuda(
~~~~~~~~~~~~~~~^
options.package, options.runtime_error_check, options.torch_compile_check
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 186, in smoke_test_cuda
smoke_test_compile("cuda" if torch.cuda.is_available() else "cpu")
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/pytorch/pytorch/.ci/pytorch/./smoke_test/smoke_test.py", line 286, in smoke_test_compile
x_pt2 = torch.compile(foo)(x)
File "/opt/conda/envs/conda-env-12375056109/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
```
Issue: https://github.com/pytorch/pytorch/issues/143406
We want to make sure to run the torch.compile check in nightly smoke testing. We still disable the runtime error check for now, since it may crash the GPU.
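For reference, a rough sketch of the kind of compile check the smoke test runs (illustrative only; the real check lives in `.ci/pytorch/smoke_test/smoke_test.py`):
```python
# Illustrative sketch of a torch.compile smoke check; names and shapes are
# assumptions, not copied from smoke_test.py.
import torch

def smoke_test_compile(device: str = "cuda") -> None:
    def foo(x: torch.Tensor) -> torch.Tensor:
        return torch.sin(x) + torch.cos(x)

    x = torch.rand(3, 3, device=device, dtype=torch.float16)
    torch.testing.assert_close(torch.compile(foo)(x), foo(x))
```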
| true
|
2,745,569,577
|
Fix unused Python variables in test/nn
|
rec
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143396
| true
|
2,745,568,912
|
[BE] Refactor argument parsing into its own function
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143513
* #143512
* #143511
* __->__ #143395
| true
|
2,745,568,705
|
[CD] Test that all PyTorch wheels support OpenMP
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143395
* __->__ #143394
* #143393
Together with https://github.com/pytorch/pytorch/pull/143393 fixes https://github.com/pytorch/pytorch/issues/123225
| true
|
2,745,568,243
|
[CD] Run smoke tests on MacOS wheel
|
malfet
|
closed
|
[
"Merged",
"release notes: releng",
"topic: improvements",
"ciflow/binaries_wheel"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143395
* #143394
* __->__ #143393
cc @seemethere
| true
|
2,745,555,115
|
Prevent users from seeing hardcoded print stmt when hypothesis is not installed
|
pytorchbot
|
closed
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #142398
Fixes: #142357
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,745,521,423
|
add some logging for tensorify
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143391
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,745,378,532
|
[AOTI][doc] Update tutorial
|
desertfire
|
closed
|
[
"Merged",
"topic: docs",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143390
Summary: Update the cpp inference part to call AOTIModelPackageLoader.run directly
| true
|
2,745,316,176
|
Fix unused variables in test_serialize_sym_float
|
rec
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143389
| true
|
2,745,306,250
|
TEST Windows build
|
atalman
|
closed
|
[
"Stale",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,745,300,626
|
[compiled autograd] Proxy nodes for user-defined C++ torch::autograd::Function
|
zou3519
|
closed
|
[
"oncall: jit",
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"module: compiled autograd",
"ci-no-td"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144115
* #143417
* #143405
* __->__ #143387
* #143304
* #143296
We define a functional version of a C++ torch::autograd::Function. The
functional version reconstructs the ctx object and then calls
backward with it.
Some more details:
- we define how to pack/unpack ctx.saved_data into an IValue. It's a
Dict[str, IValue], so it wasn't difficult.
- every call to CppNode::apply_with_saved binds a new function to Python. This is
because we're unable to reuse a previously bound function (the schema may change
depending on what the user actually puts into their Dict[str, IValue]).
Test Plan:
- existing tests
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @xmfan
| true
|
2,745,231,538
|
[draft] PP with compile test
|
H-Huang
|
open
|
[
"oncall: distributed",
"Stale",
"topic: not user facing"
] | 4
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143386
* #143599
testing command:
`python test/distributed/_composable/test_composability/test_pp_composability.py -k test_pp_with_compile`
cc @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,744,962,913
|
Fix PythonMod printing for C++
|
isuruf
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #140756
* #139899
* #143164
* __->__ #143385
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,744,480,939
|
[CPU][Brgemm] add support for int8 brgemm
|
Valentine233
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 17
|
COLLABORATOR
|
For INT8 SDPA kernel usage, we add support for INT8 Brgemm.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,744,337,982
|
[codemod] Decorate unused variables with `[[maybe_unused]]`
|
r-barnes
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Summary:
LLVM-15 has a warning `-Wunused-variable` which we treat as an error because it's so often diagnostic of a code issue. Unused variables can compromise readability or, worse, performance.
This diff either (a) removes an unused variable and, possibly, its associated code or (b) qualifies the variable with `[[maybe_unused]]`.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
Reviewed By: palmje
| true
|
2,744,289,973
|
Add 2 more APIs to the exposed public torch python APIs
|
manav-a
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
These two APIs are used internally by some projects and need to be exposed, since those projects are built using the OSS toolchain.
https://github.com/pytorch/pytorch/commit/af8789c05654477649e4d99e6a253a2ebd81ad9e - this change hid most APIs in torch python, barring the ones explicitly specified, which broke the build.
| true
|
2,744,267,089
|
Running decompositions over torch.export program adds one more output to the graph result
|
vivekkhandelwal1
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Running decompositions over a program exported through torch.export manipulates the graph such that the number of results returned by the manipulated graph is one more than in the original graph. Below is the code to repro the issue:
```
import torch
from torch.export import export

class Mod(torch.nn.Module):
    def forward(self, x, noise):
        res = torch.ops.aten.rrelu_with_noise(
            x, noise, 0.4, 0.6, True
        )
        return torch.mean(res)

example_args = (torch.randn(10, 10), torch.randn(10, 10))
exported_program: torch.export.ExportedProgram = export(
    Mod(), args=example_args
)
print("Exported program:\n", exported_program)
decomposed_program = exported_program.run_decompositions([])
print("Decomposed program:\n", decomposed_program)
```
Original Graph:
```
class GraphModule(torch.nn.Module):
    def forward(self, x: "f32[10, 10]", noise: "f32[10, 10]"):
        # File: /home/vivek-pc/work/torch-mlir-vivek/fx_repro.py:6 in forward, code: res = torch.ops.aten.rrelu_with_noise(
        rrelu_with_noise: "f32[10, 10]" = torch.ops.aten.rrelu_with_noise.default(x, noise, 0.4, 0.6, True); x = noise = None

        # File: /home/vivek-pc/work/torch-mlir-vivek/fx_repro.py:9 in forward, code: return torch.mean(res)
        mean: "f32[]" = torch.ops.aten.mean.default(rrelu_with_noise); rrelu_with_noise = None
        return (mean,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='noise'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='mean'), target=None)])
Range constraints: {}
```
Decomposed Graph:
```
class GraphModule(torch.nn.Module):
    def forward(self, x: "f32[10, 10]", noise: "f32[10, 10]"):
        # File: /home/vivek-pc/work/torch-mlir-vivek/fx_repro.py:6 in forward, code: res = torch.ops.aten.rrelu_with_noise(
        rrelu_with_noise_functional = torch.ops.aten.rrelu_with_noise_functional.default(x, noise, 0.4, 0.6, True); x = noise = None
        getitem: "f32[10, 10]" = rrelu_with_noise_functional[0]
        getitem_1: "f32[10, 10]" = rrelu_with_noise_functional[1]; rrelu_with_noise_functional = None

        # File: /home/vivek-pc/work/torch-mlir-vivek/fx_repro.py:9 in forward, code: return torch.mean(res)
        mean: "f32[]" = torch.ops.aten.mean.default(getitem); getitem = None
        return (getitem_1, mean)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='noise'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_INPUT_MUTATION: 6>, arg=TensorArgument(name='getitem_1'), target='noise'), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='mean'), target=None)])
Range constraints: {}
```
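Continuing the repro above, the extra output shows up in the decomposed program's graph signature as a USER_INPUT_MUTATION of `noise`, which can be inspected directly:
```python
# Inspect the decomposed signature (continuing the repro above): the extra output
# is a USER_INPUT_MUTATION of `noise`, in addition to the user output `mean`.
for spec in decomposed_program.graph_signature.output_specs:
    print(spec.kind, spec.arg.name, spec.target)
```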
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241216+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: Could not collect
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.31.1
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-50-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5950X 16-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 51%
CPU max MHz: 5083.3979
CPU min MHz: 2200.0000
BogoMIPS: 6787.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.0rc1
[pip3] onnx==1.16.1
[pip3] torch==2.6.0.dev20241216+cpu
[pip3] torchvision==0.22.0.dev20241216+cpu
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,744,205,050
|
[export][dynamic shapes] log provenance for locals & symbols for non-strict
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export"
] | 14
|
CONTRIBUTOR
|
Adds `dtrace_structured` logging so that when a guard or real-tensor propagation assert is added, the relevant user code, along with its local symbolic values and free symbols, is logged, e.g. in the draft export CLI report (soon to be added to tlparse):
1. Guard added:
```
1. Constraint violation error.
The specified input dynamic_shapes spec was found to be incorrect during tracing.
Specifically, this guard was added: Eq(s0, 3), where {'s0': "L['args'][0][0].size()[0]"}.
This occured at the following stacktrace:
File /data/users/pianpwk/pytorch/test/export/test_draft_export.py, lineno 267, in forward:
assert a.shape[0] == 3
Locals:
a: Tensor(shape: torch.Size([s0, 3]), stride: (3, 1), storage_offset: 0)
Symbols:
s0: L['args'][0][0].size()[0]
...
```
2. Real tensor propagation:
```
1. Data dependent error.
When exporting, we were unable to evaluate the value of `u2 < 0`.
This was encountered 8 times.
This occurred at the following stacktrace:
File /data/users/pianpwk/pytorch/test/export/test_draft_export.py, lineno 217, in forward:
return res[:c_item]
Locals:
res: Tensor(shape: torch.Size([u0, u1]), stride: (Max(1, u1), 1), storage_offset: 0)
c_item: u2
...
```
Currently the values are extracted from the traceback, and are only valid for non-strict; strict seems to require storing & fakifying locals in the frames reported by `TracingContext`.
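For illustration, a minimal sketch (assumed, mirroring the quoted report rather than taken from this PR) of user code that triggers the first kind of report, a guard `Eq(s0, 3)` on a dim the user marked dynamic:
```python
# Minimal sketch: a dynamic dim plus a shape assert makes export add the guard
# Eq(s0, 3); the new logging records the user line and the locals/symbols involved.
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, a):
        assert a.shape[0] == 3
        return a + 1

dynamic_shapes = {"a": {0: Dim("s0")}}
# Raises a constraint-violation error like the one quoted above.
export(M(), (torch.randn(3, 3),), dynamic_shapes=dynamic_shapes)
```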
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,744,090,915
|
Circular import in torch._dynamo.trace_rules
|
bzamecnik
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
When trying to import `torch._dynamo.trace_rules` (e.g. via `import lightning`), the import crashes with a circular import error.
How to reproduce:
```
import torch._dynamo.trace_rules
```
Traceback:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import torch._dynamo.trace_rules
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/_dynamo/__init__.py:2
1 import torch
----> 2 from . import convert_frame, eval_frame, resume_execution
3 from .backends.registry import list_backends, lookup_backend, register_backend
4 from .callback import callback_handler, on_compile_end, on_compile_start
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:48
45 from torch.utils._python_dispatch import _disable_current_modes
46 from torch.utils._traceback import format_traceback_short
---> 48 from . import config, exc, trace_rules
49 from .backends.registry import CompilerFn
50 from .bytecode_analysis import remove_dead_code, remove_pointless_jumps
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/_dynamo/trace_rules.py:52
49 from .resume_execution import TORCH_DYNAMO_RESUME_IN_PREFIX
50 from .utils import getfile, hashable, NP_SUPPORTED_MODULES, unwrap_if_wrapper
---> 52 from .variables import (
53 BuiltinVariable,
54 FunctorchHigherOrderVariable,
55 NestedUserFunctionVariable,
56 SkipFunctionVariable,
57 TorchInGraphFunctionVariable,
58 UserFunctionVariable,
59 UserMethodVariable,
60 )
63 if typing.TYPE_CHECKING:
64 from .variables.base import VariableTracker
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/_dynamo/variables/__init__.py:38
30 from .distributed import BackwardHookVariable, DistributedVariable, PlacementVariable
31 from .functions import (
32 FunctoolsPartialVariable,
33 NestedUserFunctionVariable,
(...)
36 UserMethodVariable,
37 )
---> 38 from .higher_order_ops import (
39 FunctorchHigherOrderVariable,
40 TorchHigherOrderOperatorVariable,
41 )
42 from .iter import (
43 CountIteratorVariable,
44 CycleIteratorVariable,
(...)
47 RepeatIteratorVariable,
48 )
49 from .lazy import LazyVariableTracker
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/_dynamo/variables/higher_order_ops.py:14
12 import torch.fx
13 import torch.nn
---> 14 import torch.onnx.operators
15 from torch._dynamo.utils import get_fake_value
16 from torch._dynamo.variables import ConstantVariable
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/onnx/__init__.py:49
36 from .errors import CheckerError # Backwards compatibility
37 from .utils import (
38 _optimize_graph,
39 _run_symbolic_function,
(...)
46 unregister_custom_op_symbolic,
47 )
---> 49 from ._internal.exporter import ( # usort:skip. needs to be last to avoid circular import
50 DiagnosticOptions,
51 ExportOptions,
52 ONNXProgram,
53 ONNXProgramSerializer,
54 ONNXRuntimeOptions,
55 InvalidExportOptionsError,
56 OnnxExporterError,
57 OnnxRegistry,
58 dynamo_export,
59 enable_fake_mode,
60 )
62 from ._internal.onnxruntime import (
63 is_onnxrt_backend_supported,
64 OrtBackend as _OrtBackend,
65 OrtBackendOptions as _OrtBackendOptions,
66 OrtExecutionProvider as _OrtExecutionProvider,
67 )
69 __all__ = [
70 # Modules
71 "symbolic_helper",
(...)
119 "is_onnxrt_backend_supported",
120 ]
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/onnx/_internal/exporter/__init__.py:13
1 __all__ = [
2 "ONNXRegistry",
3 "ONNXProgram",
(...)
9 "verification",
10 ]
12 from . import _testing as testing, _verification as verification
---> 13 from ._analysis import analyze
14 from ._compat import export_compat
15 from ._core import export, exported_program_to_ir
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_analysis.py:14
11 from typing import TYPE_CHECKING
13 import torch
---> 14 import torch._export.serde.schema
15 from torch.export import graph_signature
16 from torch.onnx._internal.exporter import _dispatching, _registration
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/_export/__init__.py:33
31 from torch._dynamo.exc import UserError, UserErrorType
32 from torch._dynamo.source import ConstantSource
---> 33 from torch._export.non_strict_utils import make_constraints
34 from torch._export.passes.collect_tracepoints_pass import CollectTracepointsPass
35 from torch._functorch.aot_autograd import aot_export_module, GraphSignature
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/_export/non_strict_utils.py:16
8 import torch.utils._pytree as pytree
9 from torch._dynamo.source import (
10 AttrSource,
11 GetItemSource,
(...)
14 TensorPropertySource,
15 )
---> 16 from torch._dynamo.variables.builder import TrackedFake
17 from torch._export.passes.add_runtime_assertions_for_constraints_pass import InputDim
18 from torch._export.passes.lift_constants_pass import ConstantAttrMap
File /opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py:73
52 from ..side_effects import SideEffects
53 from ..source import (
54 AttrSource,
55 CallMethodItemSource,
(...)
71 TupleIteratorGetItemSource,
72 )
---> 73 from ..trace_rules import (
74 is_callable_allowed,
75 is_numpy,
76 is_numpy_dtype,
77 is_numpy_type_info,
78 )
79 from ..utils import (
80 build_checkpoint_variable,
81 clone_input,
(...)
99 wrap_fake_exception,
100 )
102 from .base import MutableLocal, typestr, VariableTracker, VariableTrackerMeta
ImportError: cannot import name 'is_callable_allowed' from partially initialized module 'torch._dynamo.trace_rules' (most likely due to a circular import) (/opt/homebrew/Caskroom/miniforge/base/envs/rossum_311/lib/python3.11/site-packages/torch/_dynamo/trace_rules.py)
```
### Versions
```
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.7.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.31.2
Libc version: N/A
Python version: 3.11.5 | packaged by conda-forge | (main, Aug 27 2023, 03:33:12) [Clang 15.0.7 ] (64-bit runtime)
Python platform: macOS-14.7.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.18.0
[pip3] onnxruntime-silicon==1.16.0
[pip3] onnxscript==0.1.0.dev20241217
[pip3] pytorch-lightning==2.4.0
[pip3] skl2onnx==1.16.0
[pip3] tf2onnx==1.15.1
[pip3] torch==2.4.1
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.19.1
[pip3] tritonclient==2.46.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchvision 0.19.1 pypi_0 pypi
[conda] tritonclient 2.46.0 pypi_0 pypi
```
| true
|
2,744,062,599
|
Remove assert from partitioner.py
|
digantdesai
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 6
|
CONTRIBUTOR
|
Remove erroneous assert assuming a dependent (user) node to be in the partition. This partially reverts #136616 by removing the assert.
Tested locally with a failing ExecuTorch Arm test using
```
$ python -m examples.arm.aot_arm_compiler --model_name mv2 --target ethos-u55-128 --delegate --quantize
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,744,052,897
|
[MPS] Use metal shaders for all view ops
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 7
|
CONTRIBUTOR
|
Before this PR, Metal shaders were used to scatter/gather 1-5 dimensional tensors.
This PR introduces generalized ones that can be used for any dimensionality and, as a result, gets rid of 700+ lines of complex and untested code that might not even work as expected.
The generalized gather shader looks as follows
```metal
kernel void gather_kernel_n(uint linear_index [[thread_position_in_grid]],
constant void * src_ [[buffer(0)]],
device void * dst_ [[buffer(1)]],
constant uint32_t * size [[buffer(2)]],
constant uint32_t * stride [[buffer(3)]],
constant uint32_t & numel [[buffer(4)]],
constant int32_t & ndim [[buffer(5)]]) {{
if (linear_index >= numel) return;
constant {0} * src = (constant {0} *)src_;
device {1} * dst = (device {1} *)dst_;
uint64_t src_offs = 0;
auto src_idx = linear_index;
for(int dim = ndim - 1; dim >= 0; --dim) {{
src_offs += stride[dim] * (src_idx % size[dim]);
src_idx /= size[dim];
}}
dst[linear_index] = cast<{1}>(src[src_offs]);
}}
```
Which, according to the following benchmark
```python
from timeit import default_timer
import torch
import torch.utils.cpp_extension
from torch.utils.benchmark import Measurement, Timer
t = Timer(
stmt=f"y.copy_(x);torch.mps.synchronize()",
setup=f"x=torch.rand(4, 5, 16, 64, 33, 24, dtype=torch.float32, device='mps')[:,:,:,:24,:24,];y=torch.empty(x.shape, device=x.device, dtype=x.dtype)",
language="python", timer=default_timer
)
print(t.blocked_autorange())
```
is almost twice as fast as the previous implementation (i.e. on a MacBook M2 Pro it reports 2.9ms for the MPS version vs 1.5ms for the shader one).
On macOS Sequoia [`gatherWithUpdatesTensor: indicesTensor:...`](https://developer.apple.com/documentation/metalperformanceshadersgraph/mpsgraph/gather(withupdatestensor:indicestensor:axis:batchdimensions:name:)?language=objc) crashes if invoked with a complex data type, as one can see by running the code below
```swift
import Metal
import MetalPerformanceShadersGraph
func gatherComplexMPS(device: MTLDevice,
inp_buf: MTLBuffer, idx_buf: MTLBuffer,
out_buf: MTLBuffer,
inp_elem: Int, upd_elem: Int) {
let graph = MPSGraph()
let inputPlaceholder = graph.placeholder(shape: [inp_elem as NSNumber], dataType: .complexFloat32, name: nil)
let indicesPlaceholder = graph.placeholder(shape: [upd_elem as NSNumber], dataType: .int64, name: nil)
let outNode = graph.gather(withUpdatesTensor: inputPlaceholder, indicesTensor: indicesPlaceholder, axis: 0, batchDimensions: 0, name: nil)
let mpsInputBuffer = MPSGraphTensorData(inp_buf, shape: [inp_elem as NSNumber], dataType: .complexFloat32)
let mpsIndicesBuffer = MPSGraphTensorData(idx_buf, shape: [upd_elem as NSNumber], dataType: .int64)
let mpsOutputBuffer = MPSGraphTensorData(out_buf, shape: [inp_elem as NSNumber], dataType: .complexFloat32)
guard let queue = device.makeCommandQueue() else { fatalError("Can't make queue") }
graph.run(with: queue, feeds: [inputPlaceholder: mpsInputBuffer,
indicesPlaceholder: mpsIndicesBuffer ],
targetOperations: nil, resultsDictionary: [outNode: mpsOutputBuffer])
}
func makeBufferWithValues<T>(device: MTLDevice, values: [T]) -> MTLBuffer {
guard let buf = device.makeBuffer(length: values.count * MemoryLayout<T>.size, options: [.storageModeShared]) else { fatalError("Can't alloc") }
let buf_data = buf.contents().assumingMemoryBound(to: T.self)
for i in 0..<values.count {
buf_data[i] = values[i]
}
return buf
}
guard let device = MTLCopyAllDevices().first else { fatalError("No Metal device found") }
print("Using device \(device.name)")
let inp_buf = makeBufferWithValues(device: device, values: [1.0, 2.0 , 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
let idx_buf = makeBufferWithValues(device: device, values: [0, 1, 2, 3])
guard let out_buf = device.makeBuffer(length:8 * MemoryLayout<Float>.size, options: [.storageModeShared]) else { fatalError("Can't alloc") }
gatherComplexMPS(device: device, inp_buf: inp_buf, idx_buf: idx_buf, out_buf: out_buf, inp_elem: 4, upd_elem: 4)
```
Fixes https://github.com/pytorch/pytorch/issues/143140
| true
|
2,744,035,848
|
[Dynamo] Add DictKeySetVariable to capture dict_keys passed outside of compiled region
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143557
* #143547
* __->__ #143374
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,744,034,045
|
DISABLED test_disable_external_correlation (__main__.TestProfiler)
|
huydhn
|
open
|
[
"module: rocm",
"triaged",
"skipped"
] | 1
|
CONTRIBUTOR
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22profiler%2Ftest_profiler.py%3A%3ATestProfiler%3A%3Atest_disable_external_correlation%22%5D)).
The test is failing on ROCm https://github.com/pytorch/pytorch/pull/143314
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @sraikund16
| true
|
2,744,022,584
|
Error in DTensor uneven shard view op
|
guo-ran
|
closed
|
[
"oncall: distributed",
"module: dtensor"
] | 10
|
NONE
|
### 🐛 Describe the bug
**Description**:
There seems to be an issue when reshaping/viewing an unevenly sharded DTensor.
**Code to Reproduce:**
torchrun --nproc_per_node=4 test.py
In this test case, the local shard has 512 elements on ranks 0-2 and 0 elements on rank 3. The error occurs during the view operation.
```
import torch
from torch.distributed._tensor import Replicate, Shard, distribute_tensor, DTensor
from torch.distributed.device_mesh import init_device_mesh
from datetime import timedelta
import torch.nn as nn
new_mesh = init_device_mesh("cuda", [4], mesh_dim_names=["tp"])
x=torch.randn((6,256),dtype=torch.float32,device="cuda")
def a(x):
d_x = DTensor.from_local(x, device_mesh=new_mesh, placements=[Replicate()])
d_x = d_x.redistribute(device_mesh=new_mesh, placements=[Shard(dim=0)])
d_x = d_x.view(-1)
return d_x
out=a(x)
```
**Error log:**
For ranks 0-2:
```
[rank0]: File "/opt/tiger/test.py", line 13, in a
[rank0]: d_x = d_x.view(-1)
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/torch/_compile.py", line 31, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/torch/_dynamo/eval_frame.py", line 600, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/torch/distributed/_tensor/api.py", line 309, in __torch_dispatch__
[rank0]: return DTensor._op_dispatcher.dispatch(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/torch/distributed/_tensor/_dispatch.py", line 205, in dispatch
[rank0]: local_results = op_call(*local_tensor_args, **op_info.local_kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/dist-packages/torch/_ops.py", line 667, in __call__
[rank0]: return self_._op(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: shape '[384]' is invalid for input of size 512
```
For rank 3:
```
RuntimeError: shape '[384]' is invalid for input of size 0
```
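For context, a small sketch of the shape arithmetic behind these messages (my own illustration, not DTensor internals): `view(-1)` appears to propagate an even-shard local shape of 1536/4 = 384 elements per rank, while the actual `Shard(0)` local shards hold 512, 512, 512 and 0 elements.
```python
# Illustration only: the numbers behind "shape '[384]' is invalid for input of size 512 / 0".
global_shape = (6, 256)
world_size = 4

global_numel = global_shape[0] * global_shape[1]      # 1536
assumed_local_numel = global_numel // world_size      # 384, under an even-shard assumption

# Shard(dim=0) chunks 6 rows into ceil(6/4) = 2 rows per rank -> ranks get 2, 2, 2, 0 rows.
rows_per_rank = [2, 2, 2, 0]
actual_local_numels = [r * global_shape[1] for r in rows_per_rank]
print(assumed_local_numel, actual_local_numels)       # 384 [512, 512, 512, 0]
```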
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,743,884,050
|
clean up type nits on torch/jit/_ir_utils.py
|
bobrenjc93
|
closed
|
[
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: jit",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143371
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,743,873,847
|
remove allow-untyped-defs for torch/_C/_lazy.pyi
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143370
| true
|
2,743,873,785
|
remove allow-untyped-defs for torch/_C/_distributed_autograd.pyi
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143369
| true
|
2,743,873,721
|
remove allow-untyped-defs for torch/utils/benchmark/examples/simple_timeit.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143368
| true
|
2,743,873,667
|
remove allow-untyped-defs for torch/_lazy/device_context.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143367
| true
|
2,743,873,611
|
remove allow-untyped-defs for torch/jit/_ir_utils.py
|
bobrenjc93
|
closed
|
[
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: jit",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143371
* #143370
* #143369
* #143368
* #143367
* __->__ #143366
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,743,859,610
|
Don't 1 specialize if stride is contiguous
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143365
Fixes: https://github.com/pytorch/pytorch/issues/142024
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,743,848,800
|
Support garbage collection after pt2 compilation
|
qiurc
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Summary:
Support garbage collection after pt2 compilation.
Add a jk to control the global rollout / rollback of this functionality.
Add an env var to control an individual job's rollout.
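A minimal sketch of the described behavior (the environment-variable name below is a placeholder, not the actual knob):
```python
import gc
import os

def maybe_collect_after_compile() -> None:
    # Run a full garbage-collection pass once pt2 compilation finishes,
    # gated by an env var so individual jobs can opt in or out.
    # "TORCH_GC_AFTER_COMPILE" is a placeholder name for illustration.
    if os.environ.get("TORCH_GC_AFTER_COMPILE", "0") == "1":
        gc.collect()
```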
Test Plan:
Test the model training job with / without this changes
Reviewers:
@yuxihu @ezyang , @Yuzhen11 ,
Subscribers:
Tasks:
Tags:
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,743,827,913
|
[Inductor] Register pattern match with dropout in torchao as a customized pass
|
Valentine233
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
COLLABORATOR
|
### Description
In https://github.com/pytorch/ao/pull/1372, we are trying to add a pattern-match pass for int8 SDPA in torchao. For the passes in pytorch, the order is `joint_custom_pre -> remove_noop_ops -> constant_folding -> pattern_matcher -> fallback_random -> joint_custom_post`. As we know, the clone() from dropout is removed by `remove_noop_ops`. If we register the pattern as a `joint_custom_pre_pass`, the clone() is still present at `joint_custom_pre` time, causing the int8 SDPA pattern match to fail. If we register the pattern as a `joint_custom_post_pass`, it first hits `_partial_softmax_pattern`, which changes the decomposed ops in softmax, also causing the match to fail. As a result, the best position for the int8 SDPA pattern match is the same as `pattern_matcher` in pytorch, but there is currently no such option for a customized pass. Is it possible to move the position of `joint_custom_pre` or create a new customized pass?
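For reference, a minimal sketch of how the existing customized passes are registered today (assuming the config knobs accept a callable over the joint FX graph; the pass body is a hypothetical placeholder):
```python
import torch
import torch._inductor.config as inductor_config

def int8_sdpa_pattern_pass(graph: torch.fx.Graph) -> None:
    # Hypothetical placeholder for the torchao int8 SDPA pattern match.
    # As joint_custom_pre_pass it still sees the clone() from dropout;
    # as joint_custom_post_pass it runs after _partial_softmax_pattern has rewritten softmax.
    ...

inductor_config.joint_custom_post_pass = int8_sdpa_pattern_pass
```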
### Versions
pytorch: main branch.
torchao: main branch.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @eellison @drisspg @Chillee @jerryzh168 @leslie-fang-intel
| true
|
2,743,792,299
|
[ROS Jazzy + WSL2] No module named 'torch' when launching a ROS node with virtualenv
|
nora-4853
|
open
|
[
"module: windows",
"triaged",
"module: wsl"
] | 0
|
NONE
|
Hello, I am currently working on a ROS Jazzy project inside WSL2 (Ubuntu). I am using a Python virtual environment to install libraries like torch. However, when I launch my node using roslaunch, I encounter this error:
[ERROR] [launch]: Caught exception in launch (see debug for traceback): No module named 'torch'
---
System setup:
OS: WSL2 (Ubuntu 22.04)
ROS version: ROS 2 Jazzy
Python version: Python 3.10
Virtual Environment: Created with venv
Installed libraries:
torch 2.5.1
torchvision 0.20.1
torchaudio 2.5.1
---
Steps I followed:
1. Created a virtual environment:
python3 -m venv ~/.venv
source ~/.venv/bin/activate
pip install torch torchvision torchaudio
2. Edited the shebang line in my Python script to point to the virtual environment:
#!/home/nora/.venv/bin/python3
3. Tested torch manually inside the virtual environment, and it works fine:
python3 -c "import torch; print(torch.version)"
4. However, when I use roslaunch to run the node, the error appears:
No module named 'torch'
---
What I have tried so far:
1. Verified that the environment variables like PYTHONPATH and PATH are pointing to the virtual environment.
2. Added the virtual environment path to PYTHONPATH inside the launch file:
<env name="PYTHONPATH" value="/home/nora/.venv/lib/python3.10/site-packages"/>
3. Checked the default Python interpreter used by ROS using which python3.
---
Question:
How can I ensure that ROS uses the virtual environment’s Python interpreter and libraries (like torch) when launching nodes?
Is there a standard way to make roslaunch work with virtual environments?
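For illustration, a minimal ROS 2 launch-file sketch of what I am trying to achieve (package and executable names are placeholders):
```python
# my_node.launch.py -- sketch only; package/executable names are placeholders
from launch import LaunchDescription
from launch.actions import SetEnvironmentVariable
from launch_ros.actions import Node

VENV_SITE_PACKAGES = "/home/nora/.venv/lib/python3.10/site-packages"

def generate_launch_description():
    return LaunchDescription([
        # Expose the venv's packages (torch, torchvision, ...) to the node's interpreter.
        SetEnvironmentVariable(name="PYTHONPATH", value=VENV_SITE_PACKAGES),
        Node(package="my_package", executable="my_node"),
    ])
```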
---
Tags:
#ros2
#python
#virtualenv
#wsl2
#torch
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,743,776,764
|
tools: Add a tool to build wheels for multiple python versions
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143361
Adds a tool to build bdist_wheels sequentially for multiple different
python versions (if specified).
The goal of this tool is to eventually be able to utilize this in our
binary build runs to significantly reduce the amount of time we take to
build packages by utilizing a local ccache from the first build.
Tested locally using the following:
```
$ ccache -C # clear cache
# -p could actually reference any python interpreter
$ python tools/packaging/build_wheel.py \
-p /home/eliuriegas/.local/share/uv/python/cpython-3.12.7-linux-x86_64-gnu/bin/python3.12 \
-p /home/eliuriegas/.local/share/uv/python/cpython-3.13.0-linux-x86_64-gnu/bin/python3.13 \
-d dist-multi/
...
2024-12-17 10:48:11,365 - INFO - Build time (3.12.7): 571.440689s
2024-12-17 10:48:11,365 - INFO - Build time (3.13.0): 191.147503s
```
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,743,738,633
|
[dcp] Add ZStandard transformer
|
mhorowitz
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/binaries",
"ciflow/trunk",
"release notes: distributed (checkpoint)",
"ci-no-td"
] | 18
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143360
* #145528
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @pradeepfn @ekr0
| true
|
2,743,738,567
|
[dcp] Integrate stream extensions into DCP impl
|
mhorowitz
|
closed
|
[
"oncall: distributed",
"Merged",
"release notes: distributed (checkpoint)"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143360
* __->__ #143359
* #143358
Summary: Updates FileSystemReader/Writer, Planner, DefaultLoad/SavePlanner
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @pradeepfn @ekr0
| true
|
2,743,738,499
|
[dcp] Add extension mechanism
|
mhorowitz
|
closed
|
[
"oncall: distributed",
"Merged",
"release notes: distributed (checkpoint)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143360
* #143359
* __->__ #143358
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @pradeepfn @ekr0
| true
|
2,743,738,418
|
[distributed] Fix _ReaderView.read() and readinto() to stop reading at the end of the slice
|
mhorowitz
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143360
* #143359
* #143358
* __->__ #143357
_ReaderView doesn't work correctly when a read would extend past the end of the view.
read(-1) would call read(-1) on the base_stream, which would consume the entire underlying stream, even if the view ended before that.
read(n) would read n bytes, even if the view ended before that.
The new implementation clamps the size read to the size of the view.
readinto(b) would read len(b) bytes, even if the view ended before that.
Since the interface depends on the size of b, we use a (potentially) shortened view into b to avoid a copy. If the _ReaderView doesn't contain enough data to fill that buffer, this will appear as end of stream to the caller, which is the desired behavior.
This fix should not be user facing, since the bug is in an internal helper and is only visible with new code further down the stack.
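For illustration, a minimal self-contained sketch of the clamping behavior described above (not the actual `_ReaderView` implementation):
```python
import io

class ClampedView:
    """Read-only view over base_stream[offset : offset + length] (illustrative sketch)."""

    def __init__(self, base_stream, offset: int, length: int):
        self.base = base_stream
        self.offset = offset
        self.length = length
        self.pos = 0

    def read(self, size: int = -1) -> bytes:
        remaining = self.length - self.pos
        # Clamp read(-1)/read(n) so we never consume bytes past the end of the view.
        size = remaining if size < 0 else min(size, remaining)
        self.base.seek(self.offset + self.pos)
        data = self.base.read(size)
        self.pos += len(data)
        return data

    def readinto(self, b) -> int:
        # Use a shortened memoryview of b instead of copying when the view is nearly exhausted.
        remaining = self.length - self.pos
        view = memoryview(b)[: min(len(b), remaining)]
        self.base.seek(self.offset + self.pos)
        n = self.base.readinto(view)
        self.pos += n
        return n

# Usage: a 4-byte view into a 10-byte stream; read(-1) stops at the view boundary.
stream = io.BytesIO(b"0123456789")
v = ClampedView(stream, offset=2, length=4)
assert v.read(-1) == b"2345"
```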
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @pradeepfn @ekr0
| true
|
2,743,735,315
|
Back out "Fix undesired specialization on slice after split. (#142372)"
|
laithsakka
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 23
|
CONTRIBUTOR
|
Summary:
Original commit changeset: e54ffcc9fd48
Original Phabricator Diff: D67113058
Reviewed By: ezyang
Differential Revision: D67311579
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,743,733,759
|
Enable more C++ warnings
|
cyyever
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"module: cpu",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: quantization",
"release notes: distributed (c10d)",
"ci-no-td",
"ciflow/s390"
] | 17
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @mingfeima @XiaobingSuper @ashokei @jingxu10
| true
|
2,743,724,854
|
[fr] recognize all_reduce_barrier as a valid op
|
c-p-i-o
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary:
D67068632 introduced a better profiling name for barrier operations to be able to distinguish various ops.
Unfortunately, this broke Flight Recorder Analysis with the following error as reported by dmwu
```
fr_trace -m torchx-param_bench_16g_mi300x-all_to_all -a 0 --mast_job_version 98 -w 16
Traceback (most recent call last):
File "/usr/local/fbcode/platform010/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/fbcode/platform010/lib/python3.10/runpy.py", line 86, in _run_code
```
Test Plan: Test manually.
Differential Revision: D67305997
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,743,717,888
|
[user triton cache] Dedup user-defined Triton kernels by config in codecache
|
aakhundov
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143353
Previously, the same kernel source with different autotuning configs would generate the same cache key, which can lead to a wrong cache hit and silent incorrectness. Here we add the configs to the cache key in `FxGraphHashDetails`.
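As an illustration of the idea (not the actual `FxGraphHashDetails` code), folding the configs into the key makes two launches of the same source with different configs hash differently:
```python
import hashlib

def cache_key(kernel_source: str, configs: list[dict]) -> str:
    # Illustration only: mix the autotuning configs into the key so identical
    # kernel source with different configs no longer collides.
    h = hashlib.sha256()
    h.update(kernel_source.encode())
    for cfg in sorted(repr(sorted(c.items())) for c in configs):
        h.update(cfg.encode())
    return h.hexdigest()

src = "def add_kernel(...): ..."
key_a = cache_key(src, [{"BLOCK": 64, "num_warps": 4}])
key_b = cache_key(src, [{"BLOCK": 128, "num_warps": 8}])
assert key_a != key_b
```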
Test Plan:
```
python3 test/inductor/test_codecache.py -k test_triton_higher_order_op_different_configs
...
----------------------------------------------------------------------
Ran 2 tests in 3.590s
OK
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang
| true
|
2,743,716,583
|
[AOTI] Emit a CMakeLists.txt when package_cpp_only
|
desertfire
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: new features",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ciflow/rocm",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143352
Summary: Emit a CMakeLists.txt with compile and link options when package_cpp_only is specified. After unzipping the AOTI-generated .pt2 package file, the user can manually build the generated model code in their local environment.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov
Differential Revision: [D67458526](https://our.internmc.facebook.com/intern/diff/D67458526)
| true
|
2,743,716,538
|
[AOTI] Fix a typo in cpp_builder.py
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143352
* __->__ #143351
* #143350
Summary: passthough -> passthrough
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov
| true
|
2,743,716,488
|
[AOTI] Refactor path operations in AotCodeCompiler
|
desertfire
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143352
* #143351
* __->__ #143350
Summary: Use safer pathlib operations instead of direct string manipulation; update some path names to make them more meaningful.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov
| true
|
2,743,716,171
|
Implement additional properties [ mean, stddev, variance] for TransformedDistribution
|
AlboAlby00
|
closed
|
[
"module: distributions",
"open source",
"Stale",
"topic: not user facing"
] | 5
|
NONE
|
Fixes #142825
I plan to implement mean, stddev and variance for TransformedDistribution, which currently raise a NotImplementedError.
The idea would be to add a method "mean" to as many transformations as possible.
```python
def mean(self, base_distribution):
"""
Returns the mean of the transformed distribution
"""
raise NotImplementedError
```
I have currently just implemented the mean for the AffineTransformation and the ExponentialTransformation with Normal as base distribution, in order to receive some feedback about the general idea.
I will wait for some feedback and, if it is positive, I will continue with the rest.
For most of the other transformations there are two options, I think:
* Compute the analytical solution for as many combinations of distributions and transformations as possible.
* Implement a Monte Carlo simulation for the ones not implemented analytically.
Maybe it would be better not to implement anything with Monte Carlo, but instead keep the NotImplementedError.
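For the affine case the closed form is simple: for Y = loc + scale * X, E[Y] = loc + scale * E[X] and Var[Y] = scale^2 * Var[X]. A minimal illustrative sketch (not the proposed API itself) comparing the analytical moments against Monte Carlo for an AffineTransform over a Normal base:
```python
import torch
from torch.distributions import AffineTransform, Normal, TransformedDistribution

base = Normal(loc=torch.tensor(1.0), scale=torch.tensor(2.0))
transform = AffineTransform(loc=3.0, scale=4.0)
dist = TransformedDistribution(base, [transform])

# Closed-form moments of Y = loc + scale * X for the affine case.
mean = transform.loc + transform.scale * base.mean      # 3 + 4*1 = 7
variance = transform.scale ** 2 * base.variance         # 16 * 4 = 64

samples = dist.sample((100_000,))
print(mean, variance, samples.mean(), samples.var())    # analytical vs Monte Carlo estimate
```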
cc @fritzo @neerajprad @alicanb @nikitaved
| true
|
2,743,713,465
|
[ROCm] Get rid of extra rpath-link that was needed for libtinfo.
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 6
|
COLLABORATOR
|
Fixes #137858
Due to the extra rpath-link line inserted by these CMake lines, it is possible to unintentionally pick up other libraries that are incompatible with the version of ROCm in ${ROCM_PATH}.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,743,712,100
|
[MTIA] (3/n) Implement PyTorch APIs to query/reset device peak memory usage
|
chaos5958
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 31
|
CONTRIBUTOR
|
Summary: This diff implements the "max_memory_allocated" PyTorch API for MTIA devices, which returns the peak device DRAM usage
Test Plan:
Passed the local unit test
```
buck2 test //mtia/host_runtime/torch_mtia/tests:test_torch_mtia_api -- -r test_max_memory_allocated
```
https://www.internalfb.com/intern/testinfra/testrun/8444249544807192
Reviewed By: yuhc, egienvalue
Differential Revision: D67118173
| true
|
2,743,709,653
|
CUDA OOM error when inferencing with SwinTransformer ExportedProgram
|
rbavery
|
closed
|
[
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
I have a SWIN Transformer model that I was successfully exporting and running inference with on torch 2.3.0 using `torch._export.aot_compile`, but it now fails at inference with torch 2.5.1. With torch `2.3.0` and the `aot_compile` method I was able to run inference with a batch size of 10. When I try to upgrade from torch 2.3.0 to torch 2.5 and use the new `torch.export.export` and `torch.export.load`, I run into at least two, possibly related, issues.
There's no error during torch.export. Both strict and non-strict work. I've attached a trace from TORCH_TRACE for the export/compile step.
[dedicated_log_torch_trace_6zqra37l.log](https://github.com/user-attachments/files/18158386/dedicated_log_torch_trace_6zqra37l.log)
The first issue is an error when logging in the `deserialize_sym_int` method, when I load the successfully exported model with `torch.export.load`. There are also a lot of warnings about:
```
/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/graph.py:1586: UserWarning: Node backbone_backbone_backbone_features_1_1_attn_lifted_tensor_4 target backbone.backbone.backbone.features.1.1.attn.lifted_tensor_4 lifted_tensor_4 of backbone.backbone.backbone.features.1.1.attn does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does '
/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/graph.py:1593: UserWarning: Additional 428 warnings suppressed about get_attr references
warnings.warn(
```
The next issue is that when inference runs on a tensor with size `[1, 36, 1024, 1024]`, I get a CUDA OOM. This is surprising since, when I previously exported the model with `torch._export.aot_compile`, I could run inference on a tensor with size `[10, 36, 1024, 1024]`.
Here is how I used to export the model as a CUDA shared lib with .cubin and .so files
```python
import json
import numpy as np
import torch
import tqdm
import sys
import os
sys.path.append("./satlas")
import satlas.model.evaluate
import satlas.model.model
import os
torch.set_float32_matmul_precision("high")
# os.environ["TORCH_LOGS"] = "+dynamic"
weights_path = "/home/finetuned_satlas_explorer_sentinel2_solar_farm.pth"
config_path = "./satlas/configs/satlas_explorer_solar_farm.txt"
# the result path must be unique to disambiguate between different aot_compile models
output_directory = "aot_inductor_model"
size = 1024
with open(config_path, "r") as f:
config = json.load(f)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for spec in config["Tasks"]:
if "Task" not in spec:
spec["Task"] = satlas.model.dataset.tasks[spec["Name"]]
model = satlas.model.model.Model(
{
"config": config["Model"],
"channels": config["Channels"],
"tasks": config["Tasks"],
}
)
device = "cuda"
state_dict = torch.load(weights_path, map_location=device)
model.load_state_dict(state_dict)
model.to(device)
model.eval()
bs_min = 2
with torch.no_grad():
test_im_ts = torch.randn((9 * 4, size, size)).to(device)
x = torch.stack([test_im_ts] * bs_min, dim=0)
outputs, _ = model(x)
os.makedirs(output_directory, exist_ok=True)
model_path = os.path.join(
os.getcwd(), output_directory, "solar_satlas_sentinel2_model_pt2.so"
)
# model should be 4 image time series 9 bands each
batch_dim = torch.export.Dim("batch", min=bs_min, max=13)
channel_dim = torch.export.Dim("channel")
height_dim = torch.export.Dim("height")
width_dim = torch.export.Dim("width")
if not os.path.exists(model_path):
so_path = torch._export.aot_compile(
f=model,
args=(x,),
# Specify the first dimension of the input x as dynamic
dynamic_shapes={"batch_tensor": {0: batch_dim}},
# Specify the generated shared library path
options={
"aot_inductor.output_path": model_path,
"max_autotune": True,
},
)
```
And here is how I now export the model
```python
import json
import torch
from torch.export import Dim
import os
from huggingface_hub import hf_hub_download
import satlas.model.evaluate
import satlas.model.model
torch._logging.set_logs(all=0)
full_path = os.path.dirname(os.path.abspath(__file__))
torch.set_float32_matmul_precision("high")
# os.environ["TORCH_LOGS"] = "+dynamic"
config_path = f"{full_path}/src/configs/satlas_explorer_solar_farm.txt"
size = 1024
with open(config_path, "r") as f:
config = json.load(f)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for spec in config["Tasks"]:
if "Task" not in spec:
spec["Task"] = satlas.model.dataset.tasks[spec["Name"]]
model = satlas.model.model.Model(
{
"config": config["Model"],
"channels": config["Channels"],
"tasks": config["Tasks"],
}
)
def export_to_torchep(model, name, img_size=1024, save_dir='./compiled'):
"Save the model to pytorch ExportedProgram format."
bs_min = 2
dummy_batch = torch.randn(bs_min, 9*4, img_size, img_size).to("cuda")
# dynamic shapes for model export
batch_size = Dim("batch", min=2, max=20)
#height = Dim("height", min=2, max=2048)
#width = Dim("width", min=2, max=2048)
dynamic_shapes = {
"batch_tensor": {0: batch_size},
}
# Export the model to pytorch ExportedProgram format
with torch.no_grad():
ep = torch.export.export(
model.eval(),
(dummy_batch,),
dynamic_shapes=dynamic_shapes,
strict=True,
)
if not os.path.exists(save_dir):
os.makedirs(save_dir)
# Save the exported model
torch.export.save(ep, f"./{save_dir}/{name}")
print(
f"Model exported to pytorch ExportedProgram format: {os.path.abspath(save_dir)}/{name}" # noqa: E501
)
return ep
device = "cuda"
print(f"Downloading from Hugging Face Hub or loading from local dir...")
try:
# Provide the repository ID and filename on the Hub
weights_path = hf_hub_download(
repo_id="allenai/satlas-pretrain",
subfolder="finetuned_satlas_explorer_models_2023-07-24",
filename="finetuned_satlas_explorer_sentinel2_solar_farm.pth",
local_dir=full_path,
revision='5f6eff89d0675b7601bbe8c8d68956163ae07dd0'
)
print(f"Weights downloaded to: {weights_path}")
except Exception as e: # Catch potential download errors
print(f"Error downloading weights: {e}")
state_dict = torch.load(weights_path, map_location=device, weights_only=True)
model.load_state_dict(state_dict)
model.to(device)
print("Exporting model")
export_to_torchep(model, "satlas_solar_1024_exported.pt2")
```
Here is how I used to run inference
```python
def load_and_test_model():
import torch
model_path = "aot_inductor_model/satlas_pt2.so"
pt2_model = torch._export.aot_load(model_path, "cuda")
if torch.cuda.is_available() and torch.cuda.current_device() is not None:
device = torch.device("cuda" + ":" + str(torch.cuda.current_device()))
torch.cuda.set_device(device)
else:
raise Exception("needs to have cuda available")
test_im_ts = torch.randn((9 * 4, 256, 256)).to(device)
x = torch.stack(6 * [test_im_ts], dim=0)
outputs_aot, _ = pt2_model(x)
assert outputs_aot.shape == (1, 1, 256, 256)
```
And here is how I now run inference. I had to add `.module()`, which I didn't previously have to do; without it I got this error:
```
RuntimeError: Unable to call ExportedProgram directly. You should use `exported_program.module()` instead.
```
```python
def test_inference():
import torch
import os
full_path = os.path.dirname(os.path.abspath(__file__))
model_path = f"{full_path}/../../../compiled/satlas_solar_1024_exported.pt2"
pt2_model = torch.export.load(model_path).module()
test_img_ts = torch.randn((1, 9 * 4, 1024, 1024)).to(device)
outputs, _ = pt2_model(test_img_ts)
assert outputs.shape == (1, 1, 1024, 1024)
```
### Error logs
Full Traceback for the 1st issue with loading the model
```
→ uv run --group torchgpu --group satlas-solar python model-forge/satlas/solar/test.py
V1216 15:56:12.588000 222606 torch/fx/experimental/symbolic_shapes.py:2498] create_env
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] add_var_to_val s0 2
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] Stack (most recent call last):
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/model-forge/satlas/solar/test.py", line 16, in <module>
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] test_inference()
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/model-forge/satlas/solar/test.py", line 6, in test_inference
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] pt2_model = torch.export.load(model_path).module()
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/export/__init__.py", line 473, in load
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] ep = deserialize(artifact, expected_opset_version)
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 2437, in deserialize
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] .deserialize(
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 2316, in deserialize
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] .deserialize(
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1906, in deserialize
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] self.deserialize_graph(serialized_graph_module.graph)
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1612, in deserialize_graph
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] meta_val = self.deserialize_tensor_meta(tensor_value)
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1580, in deserialize_tensor_meta
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] tuple(self.deserialize_sym_int(val) for val in tensor_meta.sizes), # type: ignore[misc]
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1580, in <genexpr>
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] tuple(self.deserialize_sym_int(val) for val in tensor_meta.sizes), # type: ignore[misc]
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1517, in deserialize_sym_int
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] self.shape_env.add_var_to_val(sym, hint)
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 3588, in add_var_to_val
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:3588] log.debug("add_var_to_val %s %s", expr, val, stack_info=True)
V1216 15:56:13.479000 222606 torch/fx/experimental/symbolic_shapes.py:4727] _update_var_to_range s0 = VR[2, 20] (new)
I1216 15:56:13.480000 222606 torch/fx/experimental/symbolic_shapes.py:5481] constrain_symbol_range s0 [2, 20]
V1216 15:56:13.502000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval s0 >= 0 == True [statically known]
V1216 15:56:13.504000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval Eq(s0, 0) == False [statically known]
V1216 15:56:13.527000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 1024*s0 >= 0 == True [statically known]
V1216 15:56:13.528000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval Eq(1024*s0, 0) == False [statically known]
V1216 15:56:13.538000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 25165824*s0 > 1 == True [statically known]
V1216 15:56:13.654000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 256*s0 >= 0 == True [statically known]
V1216 15:56:13.655000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval Eq(256*s0, 0) == False [statically known]
V1216 15:56:13.664000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 12582912*s0 > 1 == True [statically known]
V1216 15:56:13.779000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 64*s0 >= 0 == True [statically known]
V1216 15:56:13.779000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval Eq(64*s0, 0) == False [statically known]
V1216 15:56:13.788000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 6291456*s0 > 1 == True [statically known]
V1216 15:56:14.495000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 16*s0 >= 0 == True [statically known]
V1216 15:56:14.496000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval Eq(16*s0, 0) == False [statically known]
V1216 15:56:14.505000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 3145728*s0 > 1 == True [statically known]
V1216 15:56:17.410000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 8388608*s0 > 128 == True [statically known]
V1216 15:56:17.418000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 4194304*s0 > 256 == True [statically known]
V1216 15:56:17.426000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 2097152*s0 > 512 == True [statically known]
V1216 15:56:17.434000 222606 torch/fx/experimental/symbolic_shapes.py:5201] eval 1048576*s0 > 1024 == True [statically known]
/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/export/_unlift.py:60: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
getattr_node = gm.graph.get_attr(lifted_node)
/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/graph.py:1586: UserWarning: Node backbone_backbone_backbone_features_1_1_attn_lifted_tensor_0 target backbone.backbone.backbone.features.1.1.attn.lifted_tensor_0 lifted_tensor_0 of backbone.backbone.backbone.features.1.1.attn does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does '
/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/graph.py:1586: UserWarning: Node backbone_backbone_backbone_features_1_1_attn_lifted_tensor_1 target backbone.backbone.backbone.features.1.1.attn.lifted_tensor_1 lifted_tensor_1 of backbone.backbone.backbone.features.1.1.attn does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
```
Full traceback for the second issue, when running that loaded model, CUDA OOM
```
warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does '
/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/graph.py:1586: UserWarning: Node backbone_backbone_backbone_features_1_1_attn_lifted_tensor_4 target backbone.backbone.backbone.features.1.1.attn.lifted_tensor_4 lifted_tensor_4 of backbone.backbone.backbone.features.1.1.attn does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(f'Node {node} target {node.target} {atom} of {seen_qualname} does '
/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/graph.py:1593: UserWarning: Additional 428 warnings suppressed about get_attr references
warnings.warn(
Traceback (most recent call last):
File "/home/rave/work/modelhub/model-forge/satlas/solar/test.py", line 16, in <module>
test_inference()
File "/home/rave/work/modelhub/model-forge/satlas/solar/test.py", line 13, in test_inference
outputs, _ = pt2_model(test_img_ts)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/graph_module.py", line 784, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/graph_module.py", line 361, in __call__
raise e
File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/fx/graph_module.py", line 348, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
return inner()
^^^^^^^
File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1790, in inner
result = forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<eval_with_key>.4", line 10099, in forward
File "/home/rave/work/modelhub/.venv/lib/python3.11/site-packages/torch/_ops.py", line 716, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 24.00 MiB. GPU 0 has a total capacity of 23.68 GiB of which 48.44 MiB is free. Including non-PyTorch memory, this process has 22.91 GiB memory in use. Of the allocated memory 22.01 GiB is allocated by PyTorch, and 613.60 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
V1216 15:56:21.849000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats constrain_symbol_range: CacheInfo(hits=8, misses=1, maxsize=None, currsize=1)
V1216 15:56:21.849000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats evaluate_expr: CacheInfo(hits=10074, misses=18, maxsize=256, currsize=18)
V1216 15:56:21.850000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1216 15:56:21.850000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1216 15:56:21.850000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _find: CacheInfo(hits=240, misses=1, maxsize=None, currsize=1)
V1216 15:56:21.850000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1216 15:56:21.850000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V1216 15:56:21.850000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats simplify: CacheInfo(hits=0, misses=18, maxsize=None, currsize=18)
V1216 15:56:21.850000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1216 15:56:21.850000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats replace: CacheInfo(hits=281362, misses=353, maxsize=None, currsize=353)
V1216 15:56:21.850000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=18, maxsize=None, currsize=18)
V1216 15:56:21.851000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V1216 15:56:21.851000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats get_axioms: CacheInfo(hits=17, misses=1, maxsize=None, currsize=1)
V1216 15:56:21.851000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats safe_expand: CacheInfo(hits=51249, misses=422, maxsize=256, currsize=256)
V1216 15:56:21.851000 222606 torch/fx/experimental/symbolic_shapes.py:122] lru_cache_stats uninteresting_files: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
```
### Versions
Unfortunately, collect_env.py doesn't work with uv / my uv environment.
```
Collecting environment information...
Traceback (most recent call last):
File "/home/rave/work/modelhub/collect_env.py", line 692, in <module>
main()
File "/home/rave/work/modelhub/collect_env.py", line 675, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "/home/rave/work/modelhub/collect_env.py", line 670, in get_pretty_env_info
return pretty_str(get_env_info())
^^^^^^^^^^^^^^
File "/home/rave/work/modelhub/collect_env.py", line 495, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rave/work/modelhub/collect_env.py", line 450, in get_pip_packages
for line in out.splitlines()
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
Here are my torch related requirements
```
torch==2.5.1 ; platform_system == 'Darwin'
torch==2.5.1+cu121 ; platform_system != 'Darwin'
torchvision==0.20.1 ; platform_system == 'Darwin'
torchvision==0.20.1+cu121 ; platform_system != 'Darwin'
tornado==6.4.2
tqdm==4.67.1
traitlets==5.14.3
triton==3.1.0 ; python_full_version < '3.13' and platform_machine == 'x86_64' and platform_system == 'Linux'
```
cc @chauhang @penguinwu
| true
|
2,743,706,749
|
Create build_directory if it does not exist when generating ninja build file
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Fixes: https://github.com/pytorch/vision/issues/8816
I am observing this failure on Windows, Python 3.13 vision builds:
```
Emitting ninja build file C:\actions-runner\_work\vision\vision\pytorch\vision\build\temp.win-amd64-cpython-313\Release\build.ninja...
error: [Errno 2] No such file or directory: 'C:\\actions-runner\\_work\\vision\\vision\\pytorch\\vision\\build\\temp.win-amd64-cpython-313\\Release\\build.ninja'
ERROR conda.cli.main_run:execute(49): `conda run packaging/windows/internal/vc_env_helper.bat python setup.py bdist_wheel` failed. (See above for error)
```
Adding the code above fixes it, confirmed by running ``python setup.py bdist_wheel``:
```
building 'torchvision._C' extension
Emitting ninja build file C:\actions-runner\_work\vision\vision\pytorch\vision\build\temp.win-amd64-cpython-313\Release\build.ninja...
Creating build directory C:\actions-runner\_work\vision\vision\pytorch\vision\build\temp.win-amd64-cpython-313\Release
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/26] cl /showIncludes /nologo /O2 /W3 /GL /DNDEBUG /MD /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc -Dtorchvision_EXPORTS -IC:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include\torch\csrc\api\include -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include\TH -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include\THC -IC:\actions-runner\_work\_temp\conda_environment_12361066769\include -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Include "-IC:\Pr
```
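For reference, a minimal sketch of the kind of guard described here (the real change presumably lives in `torch.utils.cpp_extension`; the function and variable names below are illustrative):
```python
import os

def emit_ninja_build_file(build_directory: str, contents: str) -> None:
    # Illustrative sketch: make sure the build directory exists before writing build.ninja,
    # otherwise open() fails with "No such file or directory" as in the log above.
    if not os.path.exists(build_directory):
        print(f"Creating build directory {build_directory}")
        os.makedirs(build_directory, exist_ok=True)
    with open(os.path.join(build_directory, "build.ninja"), "w") as f:
        f.write(contents)
```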
| true
|
2,743,657,859
|
[FlexAttention] Fix broken eager tracing
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"bug",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/rocm",
"module: flex attention"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143103
* __->__ #143344
* #143299
Fixes #143331
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng
| true
|
2,743,626,220
|
Add float8 support in serde schema
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 10
|
CONTRIBUTOR
|
Summary:
Fix https://github.com/pytorch/pytorch/issues/141316
Bump up schema minor version.
as title, add float8 support in serde schema
Test Plan:
```
buck2 run 'fbcode//mode/dev-nosan' fbcode//caffe2/test:test_export -- -r test_serialize_float8
```
Differential Revision: D67307670
| true
|
2,743,625,479
|
Add config.save.use_pinned_memory_for_d2h to serialization config
|
mikaylagawarecki
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: new features"
] | 8
|
CONTRIBUTOR
|
This was benchmarked with two separate scripts on my A100
(A) Save state_dict of llama3-style model on CUDA to disk with ``torch.save``
(B) Save `ModuleList` of 10 `nn.Linear(10,000, 10,000)` on CUDA to disk with `torch.save`
Timings are an average of 5 runs and benchmark scripts + results are attached
Under both scenarios, we see a **~2x speedup in ``torch.save`` time with ``compute_crc32=False`` and ``use_pinned_memory_for_d2h=True``** compared to the baseline of the current defaults (``compute_crc32=True`` and ``use_pinned_memory_for_d2h=False``).
(A) Save state_dict of llama3-style model on CUDA to disk with ``torch.save`` [[script](https://gist.github.com/mikaylagawarecki/d3a86ea1bb08045d1a839976808d7432)][[results](https://gist.github.com/mikaylagawarecki/f61a4714e5cff703146a1fcb7e0c755c)]
| | use_pinned_memory_for_d2h=False (Default) | use_pinned_memory_for_d2h=True |
|-|-|-|
| `compute_crc_32= True` (Default)| 28.54s | 20.76s |
| `compute_crc_32 = False` | 22.57s | **14.51s** |
(B) Save `ModuleList` of 10 `nn.Linear(10,000, 10,000)` on CUDA to disk with `torch.save` [[script](https://gist.github.com/mikaylagawarecki/ecbc505436bdd4b5190ef1b3430c12b6)][[results](https://gist.github.com/mikaylagawarecki/4e686bcf030b57de8c3ca74d8f5a88f7)]
| | use_pinned_memory_for_d2h=False (Default) | use_pinned_memory_for_d2h=True |
|-|-|-|
| `compute_crc_32= True` (Default)| 8.38s | 5.53s |
| `compute_crc_32 = False` | 6.94s | **3.99s** |
Trace of (A) with `use_pinned_memory_for_d2h=True`, `compute_crc32=False`
<img width="1745" alt="Screenshot 2024-12-16 at 7 32 33 PM" src="https://github.com/user-attachments/assets/80b87a8c-5a70-4eb9-ad66-7abc4aa7cc25" />
Baseline trace of (A) with `use_pinned_memory_for_d2h=False`, `compute_crc32=True`
<img width="1799" alt="Screenshot 2024-12-16 at 7 38 20 PM" src="https://github.com/user-attachments/assets/13fa12d1-8f5f-424c-adc4-275b67012927" />
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143342
* #143324
| true
|
2,743,623,072
|
[RFC] Introduce cache hot loading APIs (a.k.a. "Mega-cache")
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143341
This PR essentially introduces two new APIs
* torch.compiler.save_cache_artifacts
* torch.compiler.load_cache_artifacts
which aim to create a mega cache experience where the user can start collecting cache artifacts, and later call the save API to fetch them. In the next attempt, the user can "hot load" the cache artifacts via the load function.
This bundling approach reduces the need to rely on porting individual files one by one, or relying on many network requests.
Note that these APIs CANNOT log to structured logging as these functions will be called before and after compilation, as opposed to during compilation. Due to this limitation, the API returns a struct that the user can log with.
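A minimal usage sketch of the two new APIs as described above (the exact return type is an assumption here: a serialized blob plus the loggable info struct mentioned above):
```python
import torch

model = torch.nn.Linear(8, 8)
x = torch.randn(4, 8)

# First run: compile as usual, then collect whatever landed in the compile caches.
torch.compile(model)(x)
saved = torch.compiler.save_cache_artifacts()  # assumed: (serialized blob, loggable info struct)

# Later attempt (possibly another machine): hot-load the artifacts before compiling again.
if saved is not None:
    blob, info = saved
    torch.compiler.load_cache_artifacts(blob)
    torch.compile(model)(x)  # should now be served largely from the pre-populated caches
```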
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,743,622,766
|
[dynamo, eval frame] manually manage ExtraState memory
|
williamwen42
|
closed
|
[
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143340
Fixes https://github.com/pytorch/pytorch/issues/140998
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,743,595,553
|
[AOTI] Add is_big_gpu checking to test_conv3d
|
desertfire
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: test_conv3d tests max-autotune, which is only supported for big_gpu.
Differential Revision: D67306331
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov
| true
|
2,743,593,120
|
c10::SmallVector unusable with gcc/g++ 12 and 13 with `-O3`
|
hugary1995
|
open
|
[
"needs reproduction",
"module: build",
"module: cpp",
"triaged"
] | 3
|
NONE
|
### 🐛 Describe the bug
Recently, GitHub-hosted runners for `ubuntu-latest` have been migrating from Ubuntu 22.04 to Ubuntu 24.04. As part of that migration, the default compiler has changed from gcc 11.4 to gcc 13.2, and a lot of my CI workflows using libtorch are failing.
All errors look like the following
```
[build] /home/thu/micromamba/envs/neml2/lib/python3.12/site-packages/torch/include/c10/util/SmallVector.h:139:19: error: ‘net’ may be used uninitialized [-Werror=maybe-uninitialized]
[build] 139 | Base::grow_pod(getFirstEl(), MinSize, TSize);
[build] | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[build] /home/thu/micromamba/envs/neml2/lib/python3.12/site-packages/torch/include/c10/util/SmallVector.h: In function ‘neml2::TensorShape neml2::utils::add_shapes(S&& ...) [with S = {c10::SmallVector<long int, 6>, c10::ArrayRef<long int>&}]’:
[build] /home/thu/micromamba/envs/neml2/lib/python3.12/site-packages/torch/include/c10/util/SmallVector.h:73:8: note: by argument 2 of type ‘const void*’ to ‘void c10::SmallVectorBase<Size_T>::grow_pod(const void*, size_t, size_t) [with Size_T = unsigned int]’ declared here
[build] 73 | void grow_pod(const void* FirstEl, size_t MinSize, size_t TSize);
[build] | ^~~~~~~~
[build] /home/thu/projects/neml2/include/neml2/misc/utils.h:303:15: note: ‘net’ declared here
[build] 303 | TensorShape net;
[build] | ^~~
```
Not all builds fail. After some digging, I can confirm that the following conditions are needed to reproduce this error:
1. libtorch that comes with pytorch==2.5.1
2. gcc/g++ 12.4/13.2/13.3
3. `-O3`
I have not checked newer versions of gcc/g++, nor previous versions of pytorch.
However, I can confirm that with `-O2` and `-O0` the code compiles. Also, with gcc/g++ 11.4, the code compiles with all optimization levels. This led me to believe that this is a compiler bug, not a fault on the pytorch side.
### Versions
<details>
<summary>output</summary>
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.8 (++20240731025011+3b5b5c1ec4a3-1~exp1~20240731145104.143)
CMake version: version 3.30.5
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro M6000 24GB
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 2
Stepping: 1
CPU(s) scaling MHz: 44%
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4389.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 pti intel_ppin ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 10 MiB (40 instances)
L3 cache: 100 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-19
NUMA node1 CPU(s): 20-39
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch-tb-profiler==0.4.3
[pip3] triton==3.1.0
[conda] No relevant packages
```
</details>
cc @malfet @seemethere @jbschlosser
| true
|
2,743,590,742
|
An idea to improve Welford implementation in Inductor
|
shunting314
|
open
|
[
"triaged",
"oncall: pt2"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Inductor implements a one-pass Welford algorithm to compute variance. The basic idea is to maintain 3 metrics (weight, mean, m2) for each group of items and implement a combine function that merges these metrics for two groups of items.
- weight represents the number of elements in the group
- mean represents the mean value of the elements in the group
- m2 represents $\sum(x_i - mean) ^ 2$, summed over the elements within the group
There are 3 places this can be improved:
1. Instead of maintaining mean, we can just maintain sum. By maintaining mean, we need to keep adjusting the denominator when we combine two groups, since the number of elements changes.
2. Instead of maintaining m2, we can just maintain the sum of squares $\sum{x_i ^ 2}$ (i.e. equivalent to temporarily pretending the mean is 0). By maintaining m2, we need to keep correcting for the fact that the mean changes when we combine two groups.
3. By doing 1 & 2, we don't need to track weight anymore. That means we use **FEWER REGISTERS**.
The end result is equivalent to leveraging the following equation to compute variance:
$$BiasedVariance = \frac{\sum(x_i - mean) ^ 2}{n} = \frac{\sum{x_i ^ 2}}{n} - mean ^ 2$$
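A minimal numerical sketch of the proposed accumulator scheme (illustrative Python, not Inductor's generated code):
```python
import torch

def biased_var_via_sums(x: torch.Tensor, num_chunks: int = 8) -> torch.Tensor:
    # Per-group state is just (sum, sum_of_squares); no weight/mean/m2 bookkeeping.
    s = x.new_zeros(())
    ss = x.new_zeros(())
    for c in x.chunk(num_chunks):   # stand-in for per-block partial reductions
        s = s + c.sum()             # combining two groups is plain addition
        ss = ss + (c * c).sum()
    n = x.numel()                   # known statically, so no per-group weight is needed
    mean = s / n
    return ss / n - mean * mean     # biased variance

x = torch.randn(10_000, dtype=torch.float64)
assert torch.allclose(biased_var_via_sums(x), x.var(unbiased=False))
```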
These optimizations may not improve perf for every kernel, but I think they are worth the effort for the following reasons:
1. They simplify the implementation quite a bit.
2. If the kernel is fused with surrounding kernels and there is register pressure, using fewer registers indeed helps perf.
cc @chauhang @penguinwu @jansel @Chillee @eellison @peterbell10 for comments
### Error logs
.
### Versions
.
| true
|
2,743,585,809
|
Add more detail description on error for arguments
|
nluu175
|
closed
|
[
"triaged",
"open source",
"Stale",
"module: dynamo"
] | 6
|
CONTRIBUTOR
|
Fixes #122129
- Added a more detailed description for the argument type-checking error.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,743,575,336
|
xpu: implement communication collectives for XPU backend
|
dvrogozh
|
closed
|
[
"oncall: distributed",
"open source",
"Stale",
"release notes: xpu"
] | 4
|
CONTRIBUTOR
|
A draft implementation of the communication collectives APIs for the XPU backend. TODO:
- [ ] Consider where APIs should be exposed (`torch.cuda.comm` & `torch.xpu.comm` vs. `torch.nn.parallel.comm` & deprecation of `torch.cuda.comm`)
- [ ] Add tests based on: https://github.com/pytorch/pytorch/blob/6356690b3d283c65d5f990af911614cbb50b68be/test/test_cuda_multigpu.py#L1315
- [ ] Consider code reuse across cuda/comm.cpp and xpu/comm.cpp
Fixes: #143239
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,743,520,326
|
Applying python 'next' directly on a tensor is painfully slow
|
gailweiss
|
open
|
[
"triaged",
"enhancement",
"module: python frontend"
] | 4
|
NONE
|
### 🐛 Describe the bug
Applying the standard python `next` function directly to a pytorch tensor is mysteriously (and painfully) slow relative to iterating over the indices of that tensor and accessing each value independently. This is not because `next` fails to short-circuit: when printing the values being considered during the loop, only the expected values are evaluated.
Example: consider this code
```python
import torch
from time import process_time

def timed(f):  # will allow us to time the functions below
    def timed_f(*a, **kw):
        start = process_time()
        res = f(*a, **kw)
        total = process_time() - start
        print("TIME:", (" " * 4) + f.__name__, "took:", total, "s", flush=True)
        return res
    return timed_f

def predicate(loud):
    def _predicate(b):
        if loud:  # will allow us to check which items are being considered in the `next` run
            print("testing:", b)
        return b>0.1 and b<0.9
    return _predicate

@timed
def get_next_direct(a, pred):  # apply next directly to a flat tensor
    return next(v for v in a if pred(v))

@timed
def get_next_indirect(a, pred):  # apply next indirectly, by iterating over the positions in the tensor
    return next(a[i] for i in range(len(a)) if pred(a[i]))

a = torch.rand(int(1e7))

pred = predicate(False)  # quiet run, for clean timing
get_next_direct(a, pred), get_next_indirect(a, pred)

print("now with printing calls to the predicate")
pred = predicate(True)  # loud run, to see if the next is failing to break once the condition is satisfied - it is not, this is not the problem
get_next_direct(a, pred), get_next_indirect(a, pred)
```
The time difference between `get_next_direct` and `get_next_indirect` is enormous (the timings below are from my specific run). As we can see when printing from the predicate, this is not due to `next` failing to break - only the expected evaluations are made:
```
TIME: get_next_direct took: 10.626148 s
TIME: get_next_indirect took: 0.00022200000000083264 s
now with printing calls to the predicate
testing: tensor(0.7398)
TIME: get_next_direct took: 10.294364999999992 s
testing: tensor(0.7398)
TIME: get_next_indirect took: 0.0006169999999912079 s
```
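As a hedged aside (not part of the original report), a fully vectorized way to fetch the first matching element sidesteps Python-level iteration entirely:
```python
import torch

a = torch.rand(int(1e7))
mask = (a > 0.1) & (a < 0.9)
first_idx = int(mask.nonzero()[0])  # index of the first element satisfying the predicate
first_val = a[first_idx]
print(first_idx, first_val)
```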
### Versions
PyTorch version: 2.2.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.5 (main, Sep 11 2023, 08:31:25) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.8
[pip3] numpy==1.26.3
[pip3] pytorch-lightning==2.0.3
[pip3] torch==2.2.0
[pip3] torchaudio==2.2.0
[pip3] torchmetrics==1.1.2
[pip3] torchvision==0.15.2a0
[conda] msgpack-numpy 0.4.8 pypi_0 pypi
[conda] numpy 1.26.3 py311he598dae_0
[conda] numpy-base 1.26.3 py311hfbfe69c_0
[conda] pytorch 2.2.0 py3.11_0 pytorch
[conda] pytorch-lightning 2.0.3 py311hca03da5_0
[conda] torchaudio 2.2.0 py311_cpu pytorch
[conda] torchmetrics 1.1.2 py311hca03da5_0
[conda] torchvision 0.15.2 cpu_py311he74fb5d_0
cc @msaroufim @albanD
| true
|
2,743,512,366
|
NJT linear_backward should not return inner tensor as-is
|
soulitzer
|
closed
|
[
"Merged",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143415
* __->__ #143333
Fixes debug=1 use-count checks https://github.com/pytorch/pytorch/actions/runs/12187808902/job/34002323481#step:22:2521
| true
|
2,743,506,849
|
[logging] A few fixes/updates to record_compilation_metrics
|
masnesral
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143332
Summary: Mostly cosmetic, but one bug fix:
* Bug fix: Make sure compile_id is converted to a string in the compilation metrics so it's printed as, e.g., "0/1" instead of "[0, 1]"
* Sort collections in `collection_to_str`
* Print non-string elements as `"<unknown>"` instead of None (since we don't expect non-strings)
* Move the population of the legacy metrics and any pre-processing to a new factory method in CompilationMetrics
Test Plan:
```
python test/dynamo/test_structured_trace.py
python test/dynamo/test_utils.py
```
Internal testing: https://fburl.com/scuba/dynamo_compile/sandbox/l0me8auf
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,743,504,401
|
FlexAttention: compilation produces wrong shapes on nightly
|
ViktorooReps
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 9
|
NONE
|
### 🐛 Describe the bug
I am implementing a Differential Transformer with FlexAttention. It requires specific Q/K/V head sizes, since each Q and K head is split in half for each attention call. So in practice Q/K have half of the original head dimensionality, while V stays the same.
Here is how it looks more or less:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

flex_attention = torch.compile(flex_attention, dynamic=False)

qk_dims = 64
v_dims = 128
q_heads = 8
kv_heads = 4
seq_len = 10000

class Rig(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.k = torch.randn((1, kv_heads, seq_len, qk_dims), requires_grad=True, device='cuda', dtype=torch.float)
        self.v = torch.randn((1, kv_heads, seq_len, v_dims), requires_grad=True, device='cuda', dtype=torch.float)
        self.q = torch.randn((1, q_heads, seq_len, qk_dims), requires_grad=True, device='cuda', dtype=torch.float)

class FlexRig(Rig):
    def forward(self):
        u = flex_attention(
            self.q, self.k, self.v,
            enable_gqa=True,
        )
        print(self.q.shape, self.k.shape, self.v.shape, '->', u.shape)
```
Now, if we call `FlexRig` without compilation, the result is as expected:
```python
print('Flex without compile: ', end='')
FlexRig()()
```
```
Flex without compile: torch.Size([1, 8, 10000, 64]) torch.Size([1, 4, 10000, 64]) torch.Size([1, 4, 10000, 128]) -> torch.Size([1, 8, 10000, 128])
```
The dimensionality is inherited from V, and the number of heads from Q.
However, if we compile `FlexRig`:
```python
print('Flex with compile: ', end='')
torch.compile(FlexRig())()
```
```
Flex with compile: torch.Size([1, 8, 10000, 64]) torch.Size([1, 4, 10000, 64]) torch.Size([1, 4, 10000, 128]) -> torch.Size([1, 8, 10000, 64])
```
Both dimensionality and head count are inherited from Q! A 🐛.
### Versions
```
PyTorch version: 2.6.0.dev20241216+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (conda-forge gcc 12.1.0-17) 12.1.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: glibc-2.35
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9454 48-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3810.7910
CPU min MHz: 1500.0000
BogoMIPS: 5491.58
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241216+cu124
[pip3] torchaudio==2.6.0.dev20241216+cu124
[pip3] torchvision==0.22.0.dev20241216+cu124
[pip3] triton==3.1.0
[conda] numpy 2.1.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241216+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241216+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241216+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,743,504,112
|
[AoTI Minifier] UX Improvement
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 13
|
CONTRIBUTOR
|
Summary:
- When a user specifies the `TORCHINDUCTOR_MAX_AUTOTUNE=1` env variable, we add `config.max_autotune=True` to the generated minifier_launcher
- We should do this for other inductor configs as well in a follow-up diff
Currently in the dynamo and AOTI minifiers, if a config is overridden by an env variable, it will not show up in the config list in the minifier_launcher.py file. As a result, when running the minifier_launcher, users need to re-apply the same env variable.
This is:
1) not convenient for the users
2) if they copy-paste the minifier_launcher.py to us without mentioning the env variable, we could be confused and unable to reproduce the error.
Underlying implementation change:
- Add an `env_default` parameter to `codegen_config()`. If set, configs overridden by the env are not considered default.
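For illustration, the generated minifier_launcher.py would now be expected to carry a line like the following when `TORCHINDUCTOR_MAX_AUTOTUNE=1` was set (a sketch; the exact generated contents may differ):
```python
import torch._inductor.config as inductor_config

# previously only implied by the TORCHINDUCTOR_MAX_AUTOTUNE=1 env variable
inductor_config.max_autotune = True
```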
Test Plan:
```
buck2 run 'fbcode//mode/dev-nosan' fbcode//caffe2/test:utils -- -r test_codegen_config
```
Differential Revision: D67299312
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,743,497,145
|
[ROCm][Windows] Disable roctracer-related code
|
m-gallus
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 15
|
CONTRIBUTOR
|
Currently, roctracer is not available for Windows. This PR disables any usage of it on Windows and creates dummy functions to keep compatibility with existing code; the dummies warn the user that roctracer is unavailable on Windows.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,743,488,161
|
Create build_directory if it does not exist when generating ninja build file
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Fixes: https://github.com/pytorch/vision/issues/8816
I am observing this failure on Windows, Python 3.13 vision builds:
```
Emitting ninja build file C:\actions-runner\_work\vision\vision\pytorch\vision\build\temp.win-amd64-cpython-313\Release\build.ninja...
error: [Errno 2] No such file or directory: 'C:\\actions-runner\\_work\\vision\\vision\\pytorch\\vision\\build\\temp.win-amd64-cpython-313\\Release\\build.ninja'
ERROR conda.cli.main_run:execute(49): `conda run packaging/windows/internal/vc_env_helper.bat python setup.py bdist_wheel` failed. (See above for error)
```
Adding code that creates the build directory fixes it, as confirmed by running ``python setup.py bdist_wheel``:
```
building 'torchvision._C' extension
Emitting ninja build file C:\actions-runner\_work\vision\vision\pytorch\vision\build\temp.win-amd64-cpython-313\Release\build.ninja...
Creating build directory C:\actions-runner\_work\vision\vision\pytorch\vision\build\temp.win-amd64-cpython-313\Release
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/26] cl /showIncludes /nologo /O2 /W3 /GL /DNDEBUG /MD /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc -Dtorchvision_EXPORTS -IC:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include\torch\csrc\api\include -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include\TH -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Lib\site-packages\torch\include\THC -IC:\actions-runner\_work\_temp\conda_environment_12361066769\include -IC:\actions-runner\_work\_temp\conda_environment_12361066769\Include "-IC:\Pr
```
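A minimal sketch of the kind of fix the title describes (not the exact diff; the helper name here is hypothetical):
```python
import os

def emit_ninja_build_file(path: str, contents: str) -> None:
    build_dir = os.path.dirname(path)
    if not os.path.exists(build_dir):
        print(f"Creating build directory {build_dir}")  # matches the log line above
        os.makedirs(build_dir, exist_ok=True)
    with open(path, "w") as f:
        f.write(contents)
```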
| true
|
2,743,471,868
|
Increase sharding for debug build
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/periodic",
"test-config/default"
] | 3
|
CONTRIBUTOR
|
It started timing out consistently and takes 3+ hours per shard. I assume it's just that we slowly add tests over time, since I cannot find a dramatic jump recently.
| true
|
2,743,469,023
|
Prevent torch.jit.load path in torch.load when weights_only=True
|
mikaylagawarecki
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: security"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143403
* __->__ #143326
| true
|
2,743,442,481
|
Add global config for serialization in torch.utils.serialization
|
mikaylagawarecki
|
closed
|
[] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143325
* #143324
| true
|
2,743,442,359
|
Refactor serialization getter/setters into torch.utils.serialization.config
|
mikaylagawarecki
|
closed
|
[
"Merged",
"release notes: python_frontend",
"topic: improvements"
] | 1
|
CONTRIBUTOR
|
Consolidate
- get/set_default_load_endianness
- get/set_default_mmap_options
- get/set_crc32_options
into one global dynamo-style config + allow global setting of mmap. The existing APIs are not removed and will get/set from the config (as they can't be removed for BC).
In #143459 I add the local (argument-style) config.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143342
* __->__ #143324
| true
|
2,743,442,250
|
Various fix for memory leak in test autograd and dataloader
|
albanD
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143204
* __->__ #143323
* #143225
| true
|
2,743,412,641
|
[ROCm] Fix unit test: matmul_offline_tunableop
|
naromero77amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
Fixes #137936
The PR contains:
* Fix for `matmul_offline_tunableop`
* Clean-up try-finally blocks in UTs that don't use environment variables (`test_validator_tunableop_rocm`, `test_minimum_tuning_iteration_tunableop`, `test_disable_tuning_tunableop`)
* Avoid the use of environment variables in `minimum_tuning_iteration_tunableop`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,743,410,828
|
remove allow-untyped-defs for torch/masked/maskedtensor/creation.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143321
| true
|
2,743,410,736
|
remove allow-untyped-defs for torch/__config__.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143321
* __->__ #143320
* #143319
| true
|
2,743,410,640
|
remove allow-untyped-defs for torch/utils/_stats.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143319
| true
|
2,743,402,111
|
FileTimerClient: add retry logic on connect
|
d4l3k
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (torchelastic)",
"ci-no-td"
] | 8
|
MEMBER
|
Fixes #143188
The FIFO server binds from a thread -- in rare cases the client connects before the server thread starts. This adds a retry when opening the FIFO socket in non-blocking mode. It will wait up to 1s for the server to start, which balances fast error messages with some wiggle room on the server side.
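A hedged sketch of the retry-on-connect idea (illustrative only, not the actual torchelastic implementation):
```python
import errno
import os
import time

def open_fifo_with_retry(path: str, timeout_s: float = 1.0, interval_s: float = 0.01) -> int:
    # Keep trying to open the FIFO for writing in non-blocking mode until the
    # server side (the reader) shows up or the deadline passes.
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            return os.open(path, os.O_WRONLY | os.O_NONBLOCK)
        except OSError as e:
            # ENXIO: no reader yet; ENOENT: FIFO not created yet.
            if e.errno not in (errno.ENXIO, errno.ENOENT) or time.monotonic() >= deadline:
                raise
            time.sleep(interval_s)
```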
Test plan:
```
pytest --minutes 10 test/distributed/elastic/timer/file_based_local_timer_test.py -k test_watchdog_call_count -x
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o @cdzhan
| true
|
2,743,343,662
|
easy: dynamo_config: sort keys and set values
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143317
* #143307
This will create a consistent ordering of keys when writing, as well as sort sets before serializing.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,743,339,496
|
Enable swap on all Linux jobs
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default",
"no-runner-experiments"
] | 3
|
CONTRIBUTOR
|
A swapfile on Linux runner has been prepared by https://github.com/pytorch/test-infra/pull/6058. So this PR does 2 things:
* Start using the swapfile on all Linux build and test jobs
* Testing the rollout https://github.com/pytorch-labs/pytorch-gha-infra/pull/582
### Testing
Run `swapon` inside the container and the swapfile shows up correctly:
```
jenkins@259dfb0a314c:~/workspace$ swapon
NAME TYPE SIZE USED PRIO
/swapfile file 3G 256K -2
```
| true
|
2,743,294,058
|
remove nonowninglayout special case in require strides
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #141905
* __->__ #143315
NonOwningLayout is always constructed to a FixedLayout. We should handle it the same way as FixedLayout. Note: this case is very rare; I added an assertion here and no test/model failed.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,743,273,752
|
[Profiler] Add Optional Flag to turn off external correlations v2
|
sraikund16
|
closed
|
[
"enhancement",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler",
"topic: improvements",
"ciflow/rocm"
] | 5
|
CONTRIBUTOR
|
Summary: The original diff got reverted because its base commit was on a broken version of pytorch that was failing rocm tests. There is no indication that this diff had any effect on rocm. I had trouble rebasing the GH PR after the revert and accidentally closed it, so I am submitting it again.
Test Plan: See original PR with same name
Differential Revision: D67293040
| true
|
2,743,268,403
|
[EXPERIMENTAL][dynamo] Turn on `inline_inbuilt_nn_modules` for fbcode
|
StrongerXi
|
closed
|
[
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145117
* __->__ #143313
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D67295821](https://our.internmc.facebook.com/intern/diff/D67295821)
| true
|
2,743,253,459
|
[dynamo] Add a lint rule to restrict what 3P library one can import
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143312
As title, this patch prevents developers from importing third party
libraries to patch things in Dynamo, unless there's no other easy
workaround (in which case one would add the library to the allowlist in
`import_linter.py`, as instructed by the lint error).
For instance, if we remove `einops` from the allowlist, we'd get this
```verbatim
>>> Lint for torch/_dynamo/decorators.py:
Error (IMPORT) Disallowed import
importing from einops is not allowed, if you believe there's a valid
reason, please add it to import_linter.py
608 |# Note: this carefully avoids eagerly import einops.
609 |# TODO: we should delete this whole _allow_in_graph_einops logic by approximately 2024 Q2
610 |def _allow_in_graph_einops():
>>> 611 | import einops
612 |
613 | try:
614 | # requires einops > 0.6.1, torch >= 2.0
Error (IMPORT) Disallowed import
importing from einops is not allowed, if you believe there's a valid
reason, please add it to import_linter.py
612 |
613 | try:
614 | # requires einops > 0.6.1, torch >= 2.0
>>> 615 | from einops._torch_specific import ( # type: ignore[attr-defined] # noqa: F401
616 | _ops_were_registered_in_torchdynamo,
617 | )
618 |
```
| true
|
2,743,188,275
|
Script to generate NJT OpInfo testing report
|
jbschlosser
|
open
|
[
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143311
* #142063
I hacked together a script to parse skips / xfails for the OpInfo-based tests and visualize the status of passing tests in a table. Results are reported op-wise for `(forward, compile_forward, backward, compile_backward) x (contiguous, non-contiguous)`, including the (color-coded) success rate / number of xfails / number of skips.

~~TODO: It might be interesting to have a collapsible xfail / skip explorer?~~ Added this
Yes I know it looks terrible! I'm not a frontend dev and it's more important to visualize the data so I can prioritize fixes than spend time making it look nice.
TBD how to incorporate this into the NJT docs.
| true
|
2,743,178,634
|
Add a check on norm_type in LPPool2d
|
jackson-tsang578
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Fixes #134841. Added a check to ensure that the norm_type is not 0 to prevent the division by 0 error. Error messaging now informs the user that norm_type should not be 0.
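A hedged sketch of the kind of check described (not the exact diff or error message):
```python
def check_norm_type(norm_type: float) -> None:
    # LPPool raises inputs to the power norm_type and takes the 1/norm_type root,
    # so a zero norm_type would divide by zero.
    if norm_type == 0:
        raise ValueError("LPPool2d: norm_type must be non-zero, but got 0")
```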
| true
|