| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
3,042,367,779
|
get right function declaration on windows inductor
|
yuchengliu1
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"skip-url-lint"
] | 7
|
NONE
|
Fixes #152251
`get_export_declaration` introduced an extra ')' on the Windows platform, which makes this function-declaration pattern differ from the one on Linux.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,042,318,051
|
[Pipelining] Fix _batch_p2p bug for non-NCCL backends (#132644)
|
tom-pollak
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing",
"module: pipelining"
] | 9
|
NONE
|
Fixes #132644
`_batch_p2p` incorrectly assumes that `dist.batch_isend_irecv` returns a single-element list of `dist.Work`, likely due to NCCL's coalescing behaviour.
For non-NCCL backends like Gloo, multiple `dist.Work` objects are returned, causing the code to discard some operations via `.pop()`. This leads to deadlocks during pipeline parallelism.
## Changes:
* Modified `_batch_p2p` to return `list[dist.Work]` instead of popping a single element.
* Added `_wait_batch_p2p` to call `wait()` on multiple `dist.Work` objects, consuming the result of `_batch_p2p`.
* Updated references from `dist.Work` to `list[dist.Work]` (see the sketch below).
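A minimal sketch (assumed code, not the PR's actual diff) of what a list-returning `_batch_p2p` plus the companion `_wait_batch_p2p` could look like; it assumes `torch.distributed` has been initialized and `p2p_ops` is a list of `dist.P2POp`:
```python
import torch.distributed as dist


def _batch_p2p(p2p_ops: list[dist.P2POp]) -> list[dist.Work]:
    """Issue all P2P ops together and return every Work handle.

    NCCL may coalesce the ops into a single Work object, but Gloo returns
    one Work per op, so we must not assume a single-element list.
    """
    if not p2p_ops:
        return []
    return dist.batch_isend_irecv(p2p_ops)


def _wait_batch_p2p(works: list[dist.Work]) -> None:
    """Block until every outstanding op issued by _batch_p2p has completed."""
    for work in works:
        work.wait()
```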
## Testing:
* `pippy_bert.py` from #132644 now works with gloo.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,042,187,678
|
[feature] Channel Wise Parallel API for Conv layers
|
mlreviewed
|
open
|
[
"oncall: distributed",
"triaged",
"open source"
] | 4
|
NONE
|
**How about a wrapper to shard Conv layers?**
Currently we have ColumnWise Parallel, which works for Linear layers and everything derived from them (MoE, attention, etc.).
Sequence Parallel works for Norm layers.
So why not Conv layers?
I have locally tested the Channel Parallel code, and it shards Conv layers along the out-channel dim.
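To make the idea concrete, here is a hedged, single-process illustration (hypothetical code, not the proposed API): splitting a `Conv2d` weight along the out-channel dimension, computing each shard separately, and concatenating along the channel dim reproduces the full convolution, which is exactly the computation a channel-wise parallel style would distribute across ranks.
```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
conv = torch.nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
x = torch.randn(2, 3, 16, 16)

world_size = 2  # pretend we have two ranks
weight_shards = conv.weight.chunk(world_size, dim=0)  # shard along out-channels
bias_shards = conv.bias.chunk(world_size, dim=0)

# Each "rank" computes a slice of the output channels from its weight shard.
partial_outputs = [
    F.conv2d(x, w, b, padding=1) for w, b in zip(weight_shards, bias_shards)
]
full_output = torch.cat(partial_outputs, dim=1)  # reassemble the channel dim

assert torch.allclose(full_output, conv(x), atol=1e-6)
```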
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,042,127,488
|
Fix doc cosineannealinglr 152081
|
Jacobgoss30
|
open
|
[
"triaged",
"open source",
"topic: docs",
"release notes: optim"
] | 3
|
NONE
|
## Summary
This PR updates the docstring for `CosineAnnealingLR` to accurately reflect its recursive learning rate schedule. The previous docstring displayed only the SGDR closed-form expression, which doesn't match the actual recursive implementation in code.
Changes:
- Added the recursive update formula used in `get_lr()`
- Retained the original closed-form SGDR expression for reference
- Clarified that warm restarts are not implemented in this scheduler
This addresses confusion raised in issue #152081.
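For reference, these are the two expressions being contrasted, written in standard cosine-annealing notation (the exact symbols and wording in the updated docstring may differ):
```latex
% Closed-form SGDR expression (without warm restarts):
\eta_t = \eta_{\min}
       + \tfrac{1}{2}\left(\eta_{\max} - \eta_{\min}\right)
         \left(1 + \cos\left(\tfrac{T_{cur}}{T_{\max}}\pi\right)\right)

% Recursive update performed each step by get_lr():
\eta_{t+1} = \eta_{\min}
           + \left(\eta_t - \eta_{\min}\right)
             \frac{1 + \cos\left(\tfrac{T_{cur}+1}{T_{\max}}\pi\right)}
                  {1 + \cos\left(\tfrac{T_{cur}}{T_{\max}}\pi\right)}
```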
## Related issue
[#152081](https://github.com/pytorch/pytorch/issues/152081)
## Testing
Doc-only change. Ran pre-commit to verify formatting.
| true
|
3,042,060,634
|
Use `torch.types.Device` in `device_interface.py`
|
galexite
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: dynamo"
] | 19
|
CONTRIBUTOR
|
This is just a clean-up change that I noticed was possible; it removes the duplicate `_device_t` type which had the same semantics.
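As a rough illustration of what the clean-up amounts to (hypothetical code, not taken from `device_interface.py`), functions that used to be annotated with the private `_device_t` alias can annotate with the public `torch.types.Device` alias instead:
```python
import torch
from torch.types import Device  # public alias covering device, str, int, None


def normalize_device(device: Device = None) -> torch.device:
    """Accept the usual device spellings and return a torch.device."""
    if device is None:
        return torch.device("cpu")
    if isinstance(device, int):
        return torch.device("cuda", device)
    return torch.device(device)


print(normalize_device("cpu"), normalize_device(0), normalize_device(None))
```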
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,042,031,201
|
[Dynamo] Allow inlining into AO quantization modules
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
This adds dynamo inlining into `torch.ao.quantization.fake_quantize`.
This is needed for QAT compatibility with an RL training model.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,042,024,191
|
Allow Inductor backends to attest their own availability
|
galexite
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
Introduce a new classmethod, `raise_if_unavailable`, on `BaseScheduling` and its subclasses, and call it from `create_backend` when constructing the backend that uses that scheduler implementation.
This lets Inductor backends present meaningful messages to the user when prerequisites are not satisfied, simply by providing their own implementations of this classmethod. It also removes any hard-coded checks for particular backends (namely Triton) from `create_backend`.
For `TritonScheduling`, the hard-coded Triton check inside the generic `create_backend` is replaced with a call to the device interface's `raise_if_triton_unavailable`, added in [#152529](https://github.com/pytorch/pytorch/pull/152529).
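A hedged sketch of the pattern described above; the names follow the PR text, but the classes here are simplified stand-ins rather than Inductor's real signatures:
```python
class BackendUnavailableError(RuntimeError):
    pass


class BaseScheduling:
    @classmethod
    def raise_if_unavailable(cls, device=None) -> None:
        """Raise if this backend's prerequisites are missing.

        The default assumes the backend is always usable, so it is a no-op.
        """
        return None


class TritonScheduling(BaseScheduling):
    @classmethod
    def raise_if_unavailable(cls, device=None) -> None:
        # In the real code this would defer to the device interface's
        # raise_if_triton_unavailable(); here we just probe the import.
        try:
            import triton  # noqa: F401
        except ImportError as exc:
            raise BackendUnavailableError(
                f"Triton is required for the {device} Inductor backend"
            ) from exc


def create_backend(device, scheduling_cls=TritonScheduling):
    # No hard-coded per-backend checks here: each scheduler attests its own
    # availability before we construct it.
    scheduling_cls.raise_if_unavailable(device)
    return scheduling_cls()
```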
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,041,841,730
|
[WIP] Add unified memory APIs for torch.accelerator
|
guangyey
|
open
|
[
"open source"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151298
* __->__ #152932
* #138222
| true
|
3,041,704,573
|
DISABLED test_comprehensive_gather_xpu_bool (__main__.TestInductorOpInfoXPU)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 2
|
COLLABORATOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_gather_xpu_bool'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,041,702,372
|
DISABLED test_comprehensive_gather_xpu_int32 (__main__.TestInductorOpInfoXPU)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 2
|
COLLABORATOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_gather_xpu_int32'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,041,699,363
|
DISABLED test_comprehensive_gather_xpu_float16 (__main__.TestInductorOpInfoXPU)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 2
|
COLLABORATOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_gather_xpu_float16'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,041,696,791
|
DISABLED test_comprehensive_triu_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 8
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_triu_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41695542704).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_triu_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2234, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2281, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmp0xnlkdpa/de/cdeijko5f23fdh5s6uuq4lejigf5yvzzw54m62mwndxgzrmcoh6m.py", line 91, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 479, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 499, in _wait_futures
kernel = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmp9n38c8al/triton/ZPTW6GV4TNRCOU6JPZMES7HRK3M7NQH6QUUCQPYAQPUZW2H2AKZA/triton_poi_fused_triu_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(10, 10), device="cuda:0", dtype=torch.float16], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_triu_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,041,696,750
|
DISABLED test_comprehensive_rot90_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 5
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_rot90_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41695391042).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_rot90_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 857, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 2242, in bw_compiler
return inner_compile(
^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 727, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 845, in _compile_fx_inner
mb_compiled_graph, cache_info = FxGraphCache.load_with_key(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1405, in load_with_key
compiled_graph, cache_info = FxGraphCache._lookup_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 1156, in _lookup_graph
artifact_path = graph.after_deserialization(constants)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/output_code.py", line 709, in after_deserialization
code_cache = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmp18nwu1gg/gf/cgfae4yol7xtpj4utkixapxd27i3n3bvwd77adhvq744r4kflhwo.py", line 73, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 479, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 499, in _wait_futures
kernel = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3528, in result
self.static_autotuner.precompile( # type: ignore[union-attr]
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpgsov4dtf/triton/5G5FFW2HDGYLVIK5AQQBXIZT2XPRQ27FQL77NJHT3AJJLJOXKKTA/triton_poi_fused_rot90_0.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 15: SampleInput(input=Tensor[size=(5, 5, 5), device="cuda:0", dtype=torch.float16], args=(-1,(1,-1)), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=15 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_rot90_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,041,684,173
|
Forward fix D74196435
|
huydhn
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 8
|
CONTRIBUTOR
|
Summary: Forward fix a misplaced declaration from D74196435
Test Plan: Random check with a failed build `buck2 build --config fbcode.enable_gpu_sections=true --flagfile fbcode//mode/opt fbcode//accelerators/workloads/models/emu_flash/tests:test_compile_eager`
Reviewed By: wdvr
Differential Revision: D74225582
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,041,681,752
|
DISABLED test_comprehensive_scatter_xpu_float16 (__main__.TestInductorOpInfoXPU)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 2
|
COLLABORATOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_float16'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,041,634,117
|
include user stacks with constraint violation error message
|
bobrenjc93
|
open
|
[
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #153119
* #153118
* __->__ #152924
Fixes #152918
Before:
```
File "/data/users/bobren/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5588, in produce_guards_verbose
raise ConstraintViolationError(
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['x'].size()[0])! For more information, run with TORCH_LOGS="+dynamic".
- You marked L['x'].size()[0] as dynamic but your code specialized it to be a constant (5). Either remove the mark_dynamic or use a less strict API such as maybe_mark_dynamic or Dim.AUTO.
```
After:
```
File "/data/users/bobren/a/pytorch/torch/fx/experimental/symbolic_shapes.py", line 5588, in produce_guards_verbose
raise ConstraintViolationError(
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['x'].size()[0])! For more information, run with TORCH_LOGS="+dynamic".
- You marked L['x'].size()[0] as dynamic but your code specialized it to be a constant (5). Either remove the mark_dynamic or use a less strict API such as maybe_mark_dynamic or Dim.AUTO.
User stack:
File "/home/bobren/local/a/pytorch/error.py", line 5, in foo
return torch.randn(5) * x
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,041,617,359
|
Upgrade to CUDA 12.8.1 for nightly binaries
|
tinglvv
|
open
|
[
"module: cuda",
"triaged",
"open source",
"ciflow/binaries",
"topic: not user facing"
] | 10
|
COLLABORATOR
|
Upgrade current CUDA 12.8 builds to 12.8.1
cc @ptrblck @msaroufim @eqy @jerryzh168 @atalman @malfet @nWEIdia
| true
|
3,041,613,992
|
Enable 12.8.1
|
tinglvv
|
open
|
[
"module: cuda",
"triaged"
] | 0
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
Issue to track upgrading to 12.8.1
Docker Images & Windows AMI Update
- [ ] https://github.com/pytorch/pytorch/pull/152923
### Alternatives
### Additional context
_No response_
cc @ptrblck @nWEIdia @atalman @malfet @msaroufim @eqy @jerryzh168
| true
|
3,041,596,533
|
[inductor][cpu] pytorch_CycleGAN_and_pix2pix AMP multiple thread performance regression in 2025-04-27 nightly release
|
zxd1997066
|
open
|
[
"oncall: cpu inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
<p>AMP static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>pytorch_CycleGAN_and_pix2pix</td>
<td>multiple</td>
<td>1.0</td>
<td>1.927308</td>
<td>0.0307574</td>
<td>0.0592789830792</td>
<td>42.757727</td>
<td>1</td>
<td>2.266901</td>
<td>0.02525262</td>
<td>0.057245189530619994</td>
<td>43.144631</td>
<td>0.85</td>
<td>0.97</td>
<td>0.82</td>
<td>1.01</td>
</tr>
</tbody>
</table>
<p>AMP static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>pytorch_CycleGAN_and_pix2pix</td>
<td>multiple</td>
<td>1</td>
<td>1.374103</td>
<td>0.014603079</td>
<td>0.020066134663137</td>
<td>10.23786</td>
<td>1</td>
<td>2.122872</td>
<td>0.009595092</td>
<td>0.020369152144224</td>
<td>10.860537</td>
<td>0.65</td>
<td>1.02</td>
<td>0.66</td>
<td>1.06</td>
</tr>
</tbody>
</table>
<p>AMP dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>pytorch_CycleGAN_and_pix2pix</td>
<td>multiple</td>
<td>1</td>
<td>1.403394</td>
<td>0.014260671</td>
<td>0.020013340117374</td>
<td>10.228019</td>
<td>1</td>
<td>2.059172</td>
<td>0.009977453</td>
<td>0.020545291848916</td>
<td>10.900536</td>
<td>0.68</td>
<td>1.03</td>
<td>0.7</td>
<td>1.07</td>
</tr>
</tbody>
</table>
the bad commit: 68a7501dabb147d9fe7f343a33e1b91bacd3682b
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench pytorch_CycleGAN_and_pix2pix amp
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval pytorch_CycleGAN_and_pix2pix
running benchmark: 100%|██████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:04<00:00, 12.12it/s]
1.898x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,pytorch_CycleGAN_and_pix2pix,1,1.898450,27.790720,32.655342,0.877621,117.113651,133.444403,93,1,0,0,0,0,1
```
the last good commit: 015b526a2a2a6a01fc7d553680330c7d43c8144f
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench pytorch_CycleGAN_and_pix2pix amp
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval pytorch_CycleGAN_and_pix2pix
running benchmark: 100%|██████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 13.19it/s]
2.332x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,pytorch_CycleGAN_and_pix2pix,1,2.331836,23.077860,32.572655,0.928981,118.575104,127.639962,93,1,0,0,0,0,1
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>373ffb19</td>
<td>main</td>
<td>373ffb19</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>a0d440a26a555c34e87b90bef3bff960b34bb180</td>
<td>main</td>
<td>8eb21dffa9b1d0b55756ef94628f71bccfd5bbe9</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+d60ce09</td>
<td>main</td>
<td>2.6.0a0+bccaa45</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench pytorch_CycleGAN_and_pix2pix amp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/68a7501dabb147d9fe7f343a33e1b91bacd3682b
[torchbench-pytorch_CycleGAN_and_pix2pix-inference-amp-static-default-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/20053274/torchbench-pytorch_CycleGAN_and_pix2pix-inference-amp-static-default-multiple-performance-drop_guilty_commit.log)
cc @chuanqi129 @leslie-fang-intel
| true
|
3,041,592,820
|
Add overall tensor similarity comparison (#152647)
|
manojps
|
open
|
[
"open source"
] | 6
|
NONE
|
## Summary
This PR adds an overall similarity comparison requested in #152647
The comparison function is the same as https://github.com/deepseek-ai/DeepGEMM/blob/d374456787c51ad9c4b3c5dbc3668ea3ad9bb9c2/deep_gemm/utils.py#L161
## Background
Currently, comparison of tensors in `torch.testing.assert_close()` is done bit-wise or value-wise. This may not be desirable for quantized Tensors, so an overall Tensor similarity measure has been requested. The `torch.testing.assert_close()` function is extended with the following constraints to support the overall Tensor similarity comparison metric (a sketch of such a metric follows the list below).
- Comparison function is implemented only for `torch.Tensor` data type
- Requires passing `tensor_similarity=True` as argument to `torch.testing.assert_close()`
- The value of absolute tolerance `atol` is used as tolerance measure for similarity
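As a rough sketch (in the spirit of the linked DeepGEMM helper, not the exact code added by this PR), an overall similarity check could compare a single global score against `atol` instead of comparing element-wise:
```python
import torch


def overall_similarity(actual: torch.Tensor, expected: torch.Tensor) -> float:
    """Global cosine-like similarity: 1.0 means identical tensors."""
    a, e = actual.double().flatten(), expected.double().flatten()
    denom = (a * a + e * e).sum()
    if denom == 0:
        return 1.0  # both tensors are all zeros
    return (2 * (a * e).sum() / denom).item()


def assert_similar(actual, expected, atol=1e-3):
    sim = overall_similarity(actual, expected)
    if 1.0 - sim > atol:
        raise AssertionError(f"tensors differ: similarity={sim:.6f}, atol={atol}")


x = torch.randn(1024)
assert_similar(x, x + 1e-4 * torch.randn(1024))  # a small perturbation passes
```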
@pytorchbot label "module: testing"
| true
|
3,041,585,816
|
Fix conditional git diff in _link_check.yml
|
shoumikhin
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
| null | true
|
3,041,569,072
|
We should include where specialization happens when we throw a constraint violation error
|
bobrenjc93
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0
|
CONTRIBUTOR
|
Suggested by @Chillee
Here's an example repro: https://gist.github.com/bobrenjc93/bdbd0e16bcccb9280ba9113b5f647138
Which prints out this stack trace: https://gist.github.com/bobrenjc93/a9eb9b7a3ab6659e6238869eb72fedfc
But it doesn't point to the user LOC that actually caused the specialization
cc @chauhang @penguinwu @ezyang
| true
|
3,041,551,007
|
Clean up of CUTLASS_VERSION
|
narekmalk
|
closed
|
[
"open source"
] | 3
|
CONTRIBUTOR
|
Fixes #152847
| true
|
3,041,484,630
|
UNSTABLE inductor / unit-test / cuda12.6-py3.10-gcc9-sm86 / test (inductor_cpp_wrapper)
|
huydhn
|
open
|
[
"module: ci",
"triaged",
"unstable"
] | 1
|
CONTRIBUTOR
|
This is currently failing in trunk (https://github.com/pytorch/pytorch/actions/runs/14849608374/job/41692546480); marking it as unstable while we investigate.
cc @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
3,041,426,136
|
[benchmarks] disable aten.to metadata assertions for AOTI
|
pianpwk
|
closed
|
[
"release notes: benchmark",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,041,377,722
|
Fix/cudagraph output reuse error
|
KAVYANSHTYAGI
|
closed
|
[
"open source",
"release notes: mps",
"module: inductor"
] | 3
|
NONE
|
### Summary
This PR improves developer experience when using `torch.compile()` with CUDA Graphs by:
- Detecting and raising an informative error if a user passes a CUDA graph output tensor into a subsequent compiled invocation
- Clearly guiding the user to correct this by either cloning the tensor or calling `torch.compiler.cudagraph_mark_step_begin()`
### Background
Currently, reusing CUDA graph outputs as inputs leads to a confusing runtime error due to memory aliasing and replay behavior. For example:
```
import torch
def f(x): return x + 1
f = torch.compile(f, mode="reduce-overhead")
x = torch.ones(2, device='cuda')
y = f(x)
z = f(y) # RuntimeError here
```
This fails because y is an output of a CUDA graph and gets aliased/reused internally by the graph runner.
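For reference, a sketch of the two workarounds the new error message points users to; both `Tensor.clone()` and `torch.compiler.cudagraph_mark_step_begin()` exist today, so only the eager detection of the mistake is new in this PR:
```python
import torch


def f(x):
    return x + 1


f = torch.compile(f, mode="reduce-overhead")
x = torch.ones(2, device="cuda")

# Option 1: clone the graph output before feeding it back into the
# compiled function, so the aliased CUDA graph memory is not reused.
y = f(x)
z = f(y.clone())

# Option 2: explicitly mark the start of a new iteration between calls.
y = f(x)
torch.compiler.cudagraph_mark_step_begin()
z = f(y)
```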
### Changes Introduced
- In `cudagraph_trees.py`, added a runtime check in `_copy_inputs_and_remove_from_src` to detect if any source tensors were previously returned by a CUDA graph
- Outputs of the CUDA graph (returned by `node.run_first_inputs`) are now tagged with `_is_graph_output = True`
- If reuse is detected, a descriptive error message is raised suggesting the proper fix
### Example Error Message Raised
> [torch.compile - CUDA Graphs Error] You are passing an output tensor from a previous CUDA graph run into a new invocation. This leads to memory aliasing and unsafe behavior.
> Suggested fixes:
> • Clone the tensor before reuse: `y = f(x); z = f(y.clone())`
> • Or call `torch.compiler.cudagraph_mark_step_begin()` between calls to `f()`
> More: https://pytorch.org/docs/stable/dynamo/troubleshooting.html#cuda-graph-reuse
### Why This Matters
This common mistake can easily trip up users, especially those new to CUDA graphs or `torch.compile`. Making this behavior more transparent improves both usability and debuggability.
Fixes: #150706
cc: @ezyang @chauhang @bdhirsh @ngimel
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,041,377,125
|
Segmentation fault (core dumped) in torch.nn.functional.max_unpool2d
|
cx104906
|
open
|
[
"module: crash",
"triaged",
"topic: fuzzer"
] | 0
|
NONE
|
### 🐛 Describe the bug
Same as issue [#152804](https://github.com/pytorch/pytorch/issues/152804), but reproduced with a self-contained script.
```
import torch
import pickle
print(torch.__version__)
print("test.....")
test = []
data1 = [4.8920e+11, 4.8920e+11, 4.8920e+11, 4.8920e+11, 4.8920e+11]
ts1 = torch.tensor(data1, dtype=torch.float32)
test.append(ts1)
data2 = [[[[[-1.7500, 1.7188, 2.5938],
[0.2188, -3.4062, -3.8125],
[-2.9688, -3.6250, 2.4375],
[2.4062, -1.2500, 3.8750],
[-3.8125, -3.4688, -3.1250],
[3.8438, 3.7188, 1.3438],
[-2.8125, -3.3438, -2.9375],
[0.3750, 3.5312, -1.9375],
[-2.4375, -3.0938, -1.1250]]],
[[[-0.3750, -2.7188, 0.3438],
[-0.9375, 3.5312, -3.1250],
[2.1875, -3.8438, 0.3438],
[-1.7500, -0.0625, 0.6562],
[2.2188, 3.4375, -1.4688],
[0.5625, -2.5000, -1.8750],
[1.7188, -2.3750, -0.7500],
[0.2188, -2.2500, 2.1250],
[1.0312, 0.0312, -0.0312]]],
[[[-3.5625, 1.5625, 3.3125],
[-3.0312, -2.0938, 2.1562],
[3.7812, 2.5938, -2.0000],
[-1.8125, 3.4062, 1.2500],
[2.1875, 0.1250, -1.0938],
[1.0000, -1.2812, 2.4062],
[-1.9688, -2.9688, -0.5625],
[-2.6250, 1.1250, 3.9688],
[2.3125, -0.7500, -1.2188]]],
[[[0.9375, -0.6250, 0.7188],
[-2.6250, 0.7188, 1.6875],
[-0.1875, 3.2188, -3.0312],
[-2.4375, 0.1875, 3.4688],
[0.3438, 3.6562, 1.4375],
[3.9062, -2.4688, -0.8125],
[1.7812, 2.1875, -0.6875],
[-0.7188, -1.6562, 0.5000],
[-0.8125, -0.4062, -0.7188]]],
[[[-1.3438, 3.4062, 3.9062],
[-3.5625, 1.0938, 3.9688],
[0.4375, -3.4375, 2.3438],
[3.8438, 1.7188, -2.6250],
[0.5312, -0.1875, -2.6250],
[-1.1250, -3.1875, 0.5625],
[1.9688, 2.5625, 1.4375],
[1.4375, -0.8125, -0.2812],
[2.1875, 2.1562, 0.3438]]],
[[[2.1875, 2.5625, -1.3125],
[-3.9062, -2.3438, -0.2500],
[3.3438, 1.0000, 2.9688],
[-1.9688, -3.5000, 3.6250],
[-2.0625, 1.0000, 0.3125],
[-1.4688, 0.9688, 3.6250],
[3.3750, 0.0312, -3.5000],
[-0.4688, 0.3438, -2.0938],
[2.4375, 0.8125, 2.9688]]],
[[[-3.1250, -1.6562, 3.6250],
[2.4688, -0.0312, -2.0625],
[1.6562, -0.3438, 3.1250],
[-1.7188, 1.1875, -0.2500],
[1.3438, -2.2500, -1.8125],
[-2.7500, 1.2500, -2.6562],
[3.4062, -2.0625, 2.8750],
[0.6875, 3.4375, 2.0625],
[3.5938, 3.4688, 0.6562]]],
[[[1.1875, 2.0312, 2.3438],
[2.0625, -2.3750, 0.6875],
[-1.1250, -3.7188, 3.8438],
[0.4688, 3.4688, 2.6562],
[-2.8438, 2.5000, -2.0625],
[-3.0625, -1.1562, 3.9688],
[-0.9062, -3.8750, 1.9375],
[-0.5312, -3.0000, -3.2812],
[0.5000, -1.9688, 2.9688]]],
[[[3.0000, 3.1250, -1.9375],
[0.7500, -0.9688, 3.6562],
[0.0625, 2.6875, 0.0312],
[0.2812, -3.4688, -0.2500],
[2.4062, -2.6875, 1.6250],
[2.9062, 1.3750, -2.9375],
[-0.9375, 1.1250, -3.4062],
[-0.1562, -3.4375, 2.8750],
[-0.7500, 0.7812, 3.1875]]]],
[[[[0.1562, -1.8750, 1.4375],
[-0.4375, -2.5938, -2.2500],
[2.6562, 2.3438, 0.3750],
[3.2812, 0.1250, -3.4062],
[-0.6562, 3.6875, 0.7188],
[-1.0312, 0.0938, 1.0625],
[-2.9062, 3.5312, 1.4375],
[-1.4062, -1.2812, -0.3125],
[3.7188, -2.1562, -2.5312]]],
[[[2.2812, 2.0938, 2.5000],
[-2.9062, -2.0312, -3.5000],
[-3.2188, -2.2188, 2.4375],
[-2.6562, -2.9375, -0.5938],
[-2.1250, -1.6250, -2.1250],
[-1.0625, 1.9375, -2.2188],
[-0.5938, -2.4062, -2.8750],
[3.7500, 2.3438, -1.7500],
[0.5312, -0.6562, 2.2188]]],
[[[2.6250, -2.7812, 2.4062],
[3.8438, -1.6250, -3.9688],
[1.4688, -0.2812, 0.5938],
[0.9688, -3.0312, 3.2188],
[-1.0000, 1.1250, 1.6562],
[-0.1562, 3.0000, -0.3750],
[0.5938, -1.5625, -1.7500],
[3.0938, -2.1562, 2.9375],
[-3.8438, -3.5000, 1.7812]]],
[[[-0.1875, -0.1250, 1.6562],
[0.5625, -2.7188, -0.1250],
[-3.0938, 0.9375, 3.1875],
[-1.8125, -1.1562, 3.6875],
[-3.0312, -3.8125, 2.1875],
[-2.8438, -0.8438, 0.8438],
[-2.7500, 2.5938, -3.3125],
[-3.3125, -0.7188, 0.5312],
[0.3438, -0.6250, -2.1250]]],
[[[3.1875, 3.2812, 1.5938],
[-1.0938, -2.0000, 1.5000],
[2.1875, 0.3750, 0.2500],
[-0.8125, -2.6875, 1.9688],
[3.1250, 3.0938, 1.5938],
[-2.7500, -1.6250, 0.4688],
[3.3750, -3.3125, 0.9062],
[2.6250, -0.1875, -3.8125],
[0.8125, -3.0938, 3.9688]]],
[[[3.3750, -0.2812, -2.5312],
[-1.9062, 1.5312, 0.3438],
[-3.0625, -2.2500, -2.2188],
[-2.0000, -3.6875, -2.2188],
[2.1250, 2.4688, -2.7188],
[-0.6562, 3.4062, 3.4062],
[1.2812, 1.5938, -2.0000],
[3.2500, -1.2188, 3.7812],
[2.4375, 3.6875, 3.3125]]],
[[[-2.6250, 3.0625, 3.6250],
[-2.3750, -3.8750, -2.4375],
[-3.4375, -3.3125, 0.2188],
[1.2188, -2.7500, -2.2812],
[-0.0625, 2.7500, -1.2188],
[1.8750, -2.2500, -1.9375],
[2.3750, -2.9688, -1.6250],
[-2.9062, 3.0625, 1.9375],
[3.0625, -0.9688, -2.1875]]],
[[[-0.6250, 0.1250, 0.5312],
[-3.4375, -1.3750, 1.0625],
[3.5625, 3.8438, 1.0625],
[3.4688, 2.7188, -2.4688],
[-3.1875, 2.9062, -0.4375],
[3.1562, -3.5312, 1.8438],
[-1.8438, -3.5312, 1.3438],
[2.8750, -1.0625, -1.8438],
[3.0938, 1.1562, -3.5312]]],
[[[1.8750, -0.5312, 0.6562],
[3.3125, -1.3438, -3.5625],
[-1.8438, 1.3750, 2.0000],
[0.8750, 2.3438, 1.4062],
[1.5000, -3.7188, -1.5938],
[0.5938, 2.6562, 1.3438],
[-3.8750, -2.9062, -2.7812],
[-1.6875, 1.7188, -0.3750],
[3.8438, 1.8125, -1.3125]]]],
[[[[-2.5000, 1.4375, 0.0312],
[2.9688, 0.2188, -3.7812],
[-3.6250, 2.6562, -0.6562],
[-3.1562, -2.1875, -0.7812],
[1.9375, 2.4688, 2.2500],
[3.2812, 0.6250, 3.5938],
[2.2188, -0.8438, 1.5625],
[2.1875, 0.3438, 0.3125],
[-1.5938, 0.5625, -0.1562]]],
[[[3.9375, 2.0000, -1.8125],
[-0.9688, 1.6250, -3.0625],
[3.7500, -2.4375, -4.0000],
[-2.4062, -3.7188, -2.5625],
[-0.2500, -3.5938, 2.3750],
[2.0625, -3.6562, 2.1250],
[3.8438, -3.3750, 1.5938],
[2.0938, 1.0938, 1.7188],
[-0.0625, 1.5312, -0.5000]]],
[[[-3.5000, 2.3438, -0.5312],
[2.6250, -3.3750, 2.7188],
[0.0000, -0.6875, -3.4688],
[3.8750, 1.2188, -0.9375],
[1.3438, 2.0000, 0.3438],
[1.2500, 3.9375, -2.3438],
[3.3125, -1.6250, -2.9688],
[0.6562, 2.2188, 0.0938],
[0.1875, 2.0625, 3.4375]]],
[[[-3.4375, 3.7500, 1.2188],
[-2.0000, -2.0938, -0.5312],
[2.0625, 3.0625, 0.6875],
[-3.0000, 0.6875, -3.2188],
[2.8438, -2.1875, -2.2500],
[0.5000, 2.7188, -3.7500],
[3.8438, 3.4688, 0.9375],
[-3.5312, 3.3125, 0.7188],
[-3.3750, -1.4688, 0.5938]]],
[[[-2.5938, 3.3438, -3.6562],
[1.5312, 0.3750, -0.3125],
[-3.7188, 3.8125, 3.5312],
[-1.2500, 1.0625, 2.4375],
[-2.2500, -3.1875, 2.8125],
[-0.7500, -3.9375, 1.1250],
[-1.7812, 1.8750, 0.8438],
[-0.7188, -0.5000, -1.6562],
[-0.7188, 1.9062, -3.2812]]],
[[[-3.6875, -2.0312, -3.3750],
[-2.3125, 3.2188, 2.0312],
[3.7188, -2.1875, 3.1562],
[-0.7500, -2.7812, 2.4375],
[-3.9062, -3.7500, -2.1250],
[0.3750, 1.8125, -1.0312],
[3.8750, 0.2500, 0.2500],
[-3.7188, -2.7188, -0.1250],
[3.9062, -2.5312, 2.8125]]],
[[[0.7500, 2.3438, -2.7812],
[-2.8438, 3.1875, -1.4062],
[0.4062, -2.9375, 3.2188],
[-1.7812, 2.2812, -3.1875],
[-2.3125, -2.6562, -1.9062],
[-1.0625, -1.4375, -2.6562],
[-0.0625, -3.3125, -2.0000],
[-2.2188, -2.3750, -2.9688],
[0.4062, 2.0312, -0.0938]]],
[[[-0.1875, 0.4375, 1.3047],
[-1.1250, 1.0312, 3.2188],
[-3.4062, -1.1875, -3.5312],
[-3.6562, 0.5938, 3.9062],
[-1.0938, -3.1562, -3.5000],
[3.1250, 3.9688, 0.4062],
[-1.2812, -3.3125, -2.9688],
[-3.7188, 3.9688, 0.0625],
[2.3750, -0.1250, -1.8125]]],
[[[1.8125, 3.7188, 3.1875],
[-2.2188, 3.5000, -1.7812],
[-2.2500, -2.5312, -1.9375],
[0.5312, -0.0312, -3.8750],
[2.3750, -1.1562, 3.7812],
[-2.6250, -1.0625, -2.2812],
[-2.1562, 2.0625, -3.6875],
[-0.7812, 2.1875, 2.5000],
[0.0312, 3.6562, -0.7812]]]],
[[[[-2.1250, 1.7812, 0.8125],
[1.8438, 0.9062, -3.9688],
[3.4375, 0.9062, -0.2500],
[-0.1562, 2.7188, 3.6875],
[2.5625, -3.8438, -2.2188],
[-1.2812, -0.5938, 3.5625],
[-1.7500, -2.1562, 2.0625],
[1.3438, -0.7812, 3.3750],
[0.8125, -3.6250, -2.3125]]],
[[[3.7500, 3.5938, 3.2500],
[3.1562, 0.5000, -3.4688],
[-1.1562, -3.9375, -1.0938],
[-3.2500, -3.2188, -2.6562],
[-3.1250, 3.3438, 0.7188],
[-1.5625, 3.3125, -3.6250],
[-0.3125, -0.2500, 3.1875],
[0.6562, -3.2812, 0.9375],
[2.4062, -1.8125, 0.0000]]],
[[[-2.8125, -3.7812, 3.5000],
[2.5312, 0.1875, -0.4062],
[3.4062, -0.8438, -2.7188],
[-2.8125, -1.4688, -1.0938],
[-1.5625, 1.7812, 2.8125],
[1.5312, -1.3438, -0.6562],
[-1.3125, 3.7812, -2.7500],
[-1.6250, 0.8438, 1.8125],
[1.8750, 1.4062, -3.5312]]],
[[[-2.7500, 1.3438, 0.5000],
[-2.2500, 1.4375, 1.0000],
[2.0938, 0.3750, -1.4062],
[-3.6562, -0.0312, 3.3438],
[-1.5312, 1.0312, 0.6875],
[0.2188, -0.4688, 2.6875],
[-1.9062, 2.3125, -3.3438],
[2.5312, 2.1562, 1.7500],
[1.6562, 3.0625, 2.2500]]],
[[[1.7500, -0.0938, 3.3438],
[2.4375, 2.4062, -4.0000],
[-0.3750, 3.3438, -1.4688],
[-0.8438, -0.0625, 3.0938],
[-3.9375, 2.2500, -1.8438],
[3.0312, -2.8125, -3.2812],
[1.8125, 3.0625, 3.1562],
[-2.5000, -3.2500, 1.9375],
[0.3438, 0.1250, 0.0000]]],
[[[0.0312, 0.8125, 0.7188],
[3.5938, -3.9062, 1.9688],
[1.6875, -2.5312, 2.1250],
[2.3125, 3.4062, -2.1562],
[0.5312, 3.6250, 0.1875],
[-0.9062, -2.6562, 2.4062],
[0.9062, 0.7812, 1.4062],
[-2.1562, -3.6562, 0.5312],
[-0.2500, -1.7812, -2.0938]]],
[[[3.3750, -3.0000, 0.4062],
[-3.3438, 1.3438, 0.4062],
[-0.1875, -1.1875, 0.9062],
[1.3438, 0.0625, 1.0938],
[1.8750, -0.3750, -1.2188],
[-2.5625, -1.4062, -2.2812],
[0.1250, -1.4688, -3.2812],
[3.7500, -0.3438, 0.5625],
[-3.0625, 0.4062, 1.5312]]],
[[[-3.4062, -0.3125, 0.1250],
[-3.0312, 1.7188, -0.4375],
[3.8750, -2.1250, 2.8750],
[2.0938, 3.6250, 0.2500],
[3.7188, -2.9688, -2.0625],
[3.5000, -2.5000, -2.9688],
[0.3125, 0.0625, 2.8125],
[0.7188, -1.4688, 1.3438],
[-3.9688, -2.6562, -2.3125]]],
[[[0.9062, -3.4375, 1.0625],
[-0.2188, -0.5625, -2.7188],
[-2.6875, 1.3750, 2.9688],
[-2.6562, -3.5625, -2.2812],
[2.3125, 0.0000, 1.9062],
[-1.1875, 1.8750, 2.6250],
[-3.5000, 3.3750, -0.3438],
[0.8125, -2.5938, -1.3125],
[1.3750, -3.5938, -1.2188]]]],
[[[[0.6250, -3.4062, 3.6562],
[-0.5000, 0.8125, 3.3750],
[1.5312, -0.5312, 2.0625],
[-3.8125, -2.6875, 2.4062],
[-1.4375, 3.7812, 0.2188],
[2.4688, 2.8438, 1.2812],
[0.8750, -2.3750, -1.1875],
[1.5625, 0.0938, 3.8125],
[3.9688, -0.0312, 3.7188]]],
[[[-2.9688, 3.8125, 3.6875],
[1.4062, -1.0938, -2.0312],
[2.1250, -1.2500, 1.6875],
[-0.4062, 2.4062, -3.2812],
[-0.5312, 3.7500, -3.9375],
[3.3125, -2.4375, 2.9062],
[-2.0000, 3.1250, -2.4688],
[0.6875, -0.5625, -3.5625],
[3.7812, -2.4375, -1.8125]]],
[[[-0.9375, -1.3125, 2.2188],
[-3.1562, -1.5938, -2.8125],
[1.4375, 1.7812, 0.6562],
[-0.1875, 2.8438, 2.1562],
[0.9062, 2.4375, -1.0938],
[2.4062, -1.5000, -0.9375],
[-1.8750, 2.0000, -2.7500],
[3.2812, -2.8750, 2.1562],
[2.5625, 3.7188, 1.0625]]],
[[[-1.9375, -3.3125, -2.5625],
[3.3750, 3.7812, -1.7188],
[1.0938, -2.7500, -0.7188],
[2.5000, 1.9688, 0.1250],
[2.0312, 0.7188, 2.1562],
[-2.2188, 0.0938, 1.3438],
[-1.2188, -0.5938, -2.5000],
[-0.3750, -4.0000, -0.1562],
[0.0000, -1.8125, 2.2812]]],
[[[-1.7500, -2.9688, -3.2188],
[0.7188, -0.2812, -2.6250],
[-1.3125, -1.8125, -1.3125],
[-1.6562, -0.9375, -3.0000],
[-0.3125, -2.5625, -3.1250],
[-0.0938, 1.3125, -0.5000],
[-3.2812, 2.7812, 2.8125],
[-0.3125, -3.1250, 1.0000],
[-1.4062, 2.5000, 3.6875]]],
[[[-1.4688, 1.8750, 0.1562],
[2.1562, -2.4062, -1.5000],
[2.8438, 1.4062, 3.0000],
[1.3750, -2.6250, 1.6562],
[-0.8750, -2.2500, 3.7500],
[0.4375, 0.7188, -1.0000],
[0.6562, 3.3438, -2.0938],
[0.2812, 0.6562, 1.5938],
[3.6250, -0.5938, -3.9062]]],
[[[-2.2188, 3.5312, -2.0938],
[2.3438, 2.0625, 2.7188],
[-1.5000, -1.5312, -0.3438],
[-0.0938, -1.0000, -3.8125],
[2.4688, 2.8438, -0.1875],
[3.8750, 0.6250, 2.9688],
[3.2500, 1.5625, 3.9688],
[1.3125, -3.7188, -3.6250],
[-0.8750, 1.4688, 2.0625]]],
[[[-3.9062, 2.5625, -1.0938],
[2.4688, 0.6562, -3.1562],
[2.2500, 2.3125, -1.4062],
[2.5000, 3.1562, -3.2812],
[-1.1875, 0.2812, 3.0938],
[-2.3438, -1.7812, 2.3750],
[-1.1562, 3.6562, -2.5938],
[2.5000, 2.6875, 3.5938],
[2.0938, 1.7500, -3.5625]]],
[[[-2.8750, -0.5625, 3.9062],
[1.9062, -2.5625, -0.8750],
[-3.3125, -0.5312, -0.0312],
[3.7812, 3.0000, 1.4375],
[-2.9375, -3.1562, 2.3125],
[-2.9062, -2.7188, -0.3438],
[-1.0312, -3.9375, -3.3125],
[-2.1250, -1.5000, -1.5000],
[3.1875, -1.4688, 1.9688]]]],
[[[[1.1875, -2.4688, -0.4688],
[2.5000, 0.5312, 0.2188],
[0.0625, 3.0000, 2.5625],
[-3.2812, 2.8750, -3.1562],
[0.8125, -3.8438, 1.5938],
[-3.8750, 3.9688, 2.2188],
[-2.3125, 3.2812, 3.3438],
[1.1562, -3.8125, 0.4062],
[-0.7500, 2.5000, 1.1250]]],
[[[1.7188, 2.0938, -0.2500],
[3.5312, -0.3438, -1.0312],
[-1.2500, -1.6875, -2.5938],
[3.6562, 1.4688, 240160367621363773330430099456.0000],
[-0.0000, 0.0000, 831230790598656.0000],
[196608.0000, 0.0000, 393216.0000],
[5.9688, 0.0000, 0.0000],
[114243489953582457009733632.0000, -1688849860263936.0000, 6258688.0000],
[226673591177742970257408.0000, -6597069766656.0000, -0.0000]]],
[[[-6562710028288.0000, -30991521874379752407791399965032448.0000, -3966914799920608308197299195524153344.0000],
[3966914799920608308197299195524153344.0000, 0.0000, 24576.0000],
[-0.0001, 430093764413882368.0000, -3.0000],
[0.0000, 0.0000, 3.0000],
[7933829599841216616394598391048306688.0000,
3.0000, 109527542937650462720.0000],
[0.0000, 973555660975280180349468061728768.0000, 0.0000],
[-12884901888.0000, 224312407936308147650560.0000, -0.0000],
[1.3125, 3.0938, 1.3125],
[3.6250, 1.9688, -3.0625]]],
[[[-0.9062, -0.0938, -1.7188],
[-3.8125, 3.8438, -2.3125],
[1.9375, 0.5000, 2.6250],
[-3.7188, -0.3438, -1.3438],
[-1.0000, -0.0625, -1.5000],
[-2.4375, 2.5625, 3.9688],
[1.9688, 2.7500, 1.3125],
[2.1875, 0.4375, -1.7188],
[-1.3750, -2.2188, 0.5938]]],
[[[-1.9688, -3.4062, 3.1875],
[3.0625, 2.9062, -2.5938],
[-3.6875, 1.1250, 2.8438],
[2.2812, 0.5312, 3.4062],
[-3.8125, 3.8125, -0.5625],
[2.7188, -2.7500, -2.0312],
[-1.9062, -0.1250, -0.2188],
[2.1562, 3.7812, -3.4375],
[-0.4375, -2.2812, 2.7188]]],
[[[2.8125, 0.3438, 3.6875],
[1.3125, -3.0625, -2.4375],
[-1.9062, -1.0938, -0.5938],
[0.8438, -2.7812, 0.3125],
[-2.5312, 2.2188, -0.3438],
[-0.5938, 0.0000, 14855280471424563298789490688.0000],
[0.0000, -0.0000, 0.0000],
[475368975085586025561263702016.0000,
30991521874379752407791399965032448.0000, -0.0000],
[0.0001, 0.0000, -0.0000]]],
[[[15414631298775269522199910977372160.0000, 0.0000, 121694457621910022543683507716096.0000],
[50331648.0000, 0.0000, -0.0000],
[0.0000, 0.0000, 0.0000],
[393216.0000, 0.0000, 105553116266496.0000],
[27021597764222976.0000, 0.0000, 0.0000],
[55340232221128654848.0000, 0.0000, -
3966914799920608308197299195524153344.0000],
[0.0000, -0.0000, 0.0000],
[0.0000, 393216.0000, -472893095007015265011465453568.0000],
[-0.0000, -0.7461, 3.4062]]],
[[[-1.8438, 2.9375, 3.8750],
[1.0000, -2.1250, -1.0312],
[2.7500, 0.9688, -0.2812],
[2.1875, -1.3438, -0.3125],
[0.2500, -1.0312, 3.2500],
[-2.0625, 3.1250, -1.4062],
[-0.6562, -0.5938, -3.7500],
[0.6250, 2.2500, 0.0312],
[3.9688, 1.2812, 2.3125]]],
[[[3.0938, -2.3125, 1.4062],
[3.0938, -1.9688, -1.7812],
[-4.0000, 1.1562, -1.3750],
[-3.5000, 3.3125, -3.9688],
[1.5938, -2.0000, 1.7188],
[2.8438, -1.9688, -2.3438],
[-1.2812, -3.1250, -1.5000],
[-2.3125, -3.1562, -0.9688],
[-2.1250, -0.6562, -1.4375]]]],
[[[[-2.6875, 2.3125, -1.1562],
[1.4375, 3.4688, 2.6875],
[-2.2500, 2.1562, 2.5625],
[-1.1562, 1.0000, -2.9688],
[-3.8750, -0.5938, 1.5938],
[1.3750, 0.5000, -1.9688],
[2.6250, -2.6875, -0.6250],
[-1.8750, 2.2188, -2.2812],
[3.7188, -2.1250, 3.9375]]],
[[[2.9688, -1.4062, -0.9062],
[-2.2500, 1.0000, 3.0312],
[-2.2188, -0.3438, 2.8750],
[-0.1562, 0.2812, 0.9062],
[3.5938, 1.8750, -1.1875],
[2.2188, 3.4375, 1.8750],
[3.0312, -1.9375, -2.7188],
[1.9688, 2.1250, -3.4062],
[-2.5312, -0.6250, -3.4375]]],
[[[-2.1562, 1.3438, -3.7812],
[-1.0312, -3.7188, -2.0000],
[-3.2812, -0.7500, 0.2188],
[-3.0312, 3.9688, -3.7500],
[-1.8438, -1.5938, -2.5625],
[-3.7500, -4.0000, -3.7188],
[1.9375, 1.6250, -0.8125],
[2.6562, -1.4688, -3.1562],
[-1.9062, -3.0000, 3.2500]]],
[[[1.2188, 0.1250, -3.1562],
[-1.3125, -1.4062, 2.8438],
[0.6875, -1.7812, -2.2812],
[1.9688, 1.1875, -1.6562],
[1.7500, -0.3750, 3.1250],
[3.7188, -2.0312, 2.8438],
[1.8438, -1.0312, -3.2500],
[-0.0938, -0.9062, 3.5312],
[0.7812, 0.9688, 3.3750]]],
[[[3.2188, 3.0625, -2.3750],
[2.6875, 1.2188, 1.5938],
[0.1250, 0.6875, -1.0938],
[0.0625, -0.2188, -1.4688],
[1.9688, 0.0312, -3.3125],
[1.7500, -3.1875, -0.8438],
[3.3438, 2.0000, -2.0625],
[-1.9688, -2.6562, 3.5000],
[2.6250, 3.4375, 2.3438]]],
[[[2.9375, 2.1562, 0.8125],
[3.0000, 3.1875, 2.8125],
[2.8438, -3.5000, -3.3125],
[-1.3125, -0.6562, -1.8750],
[-0.2188, 2.5000, 3.8438],
[3.6562, 0.5312, 3.7500],
[-2.2812, 0.1875, 0.6250],
[-2.4375, 3.4688, 2.2500],
[-0.7812, -1.5625, -2.2188]]],
[[[0.5312, -2.9062, 0.0938],
[1.4375, -1.4688, -2.0938],
[2.9062, -3.5938, -3.9375],
[-2.6875, -0.1250, -1.5312],
[2.0312, -0.6562, 3.6875],
[3.3438, 2.2812, -3.3438],
[-2.4062, 1.9375, -1.4375],
[0.2188, -0.7188, -1.5000],
[0.2500, 1.2812, -3.2500]]],
[[[3.0625, 2.9375, -4.0000],
[-3.0938, 2.6562, -3.4062],
[-0.2500, 3.5312, -0.7188],
[0.5625, 0.9688, -3.6250],
[-3.0312, -3.6875, -2.5625],
[-0.3125, 3.3438, 1.6875],
[-1.0312, 1.7188, -1.3438],
[2.7188, 3.9688, -0.9062],
[-2.2500, -2.0000, 2.1250]]],
[[[1.3750, -1.5312, -0.1875],
[-0.7500, 2.6875, 3.1250],
[1.6562, -1.8125, -1.7500],
[-1.0312, -3.4062, -2.5312],
[1.1875, -3.5938, 0.4375],
[2.4688, 3.8125, -2.0625],
[1.9688, -3.0312, -3.2500],
[-3.9062, 2.3438, 1.9375],
[3.8438, -2.0625, -0.7500]]]],
[[[[-3.8438, -0.3438, 3.3125],
[2.0000, -0.0625, -3.7812],
[-1.1875, 0.8438, 1.6875],
[2.0938, -0.2188, -0.0312],
[-2.9375, -2.1250, -3.5000],
[-0.8438, 3.8125, 3.8750],
[2.2188, -0.2500, 1.7500],
[0.0625, -2.6562, 2.0938],
[0.1250, 0.2188, 1.8438]]],
[[[-3.5938, 2.1562, -3.5625],
[-1.0000, -2.1250, -2.1562],
[-3.9688, 1.9062, -2.0625],
[-1.8438, -1.3125, 1.8438],
[-3.9688, -1.1250, -2.2812],
[-2.8750, -0.2500, -3.6562],
[1.0000, 0.3125, 2.5000],
[-2.9375, 3.3750, -3.5312],
[-2.9375, -2.6562, 1.5312]]],
[[[-3.7500, -0.6875, -2.4062],
[3.1562, 1.9688, 2.1250],
[-3.9062, 0.9062, -1.0625],
[-2.5625, 2.9375, -2.8125],
[-3.3125, -1.0938, 2.1875],
[-2.5625, 0.6562, -3.0000],
[-2.5938, 2.1875, 2.5938],
[0.7188, 3.9688, -1.3125],
[-0.4375, 2.0625, 3.9688]]],
[[[0.8438, 0.9688, -1.9688],
[-3.0625, -3.6875, -2.0625],
[-1.2812, 3.8750, 3.8750],
[2.8125, 1.1875, 3.8438],
[2.2500, 2.5938, -1.2500],
[1.8125, 1.0625, -3.3438],
[-1.5000, -3.4375, 2.8750],
[-0.3125, -3.6250, -3.2188],
[-3.4688, 0.2812, 0.8750]]],
[[[3.4375, -0.6875, 0.3438],
[-1.5312, -2.5312, 2.9688],
[-2.6250, 3.3438, -0.1250],
[-0.2188, -2.8750, -3.1250],
[3.8438, 1.7812, 1.1875],
[1.8438, 1.8125, 2.9688],
[0.8438, 0.1250, -3.3438],
[-1.6562, -0.0312, -2.5312],
[2.3125, -3.5312, -2.9375]]],
[[[0.7188, 0.4375, -3.4375],
[-0.3438, -0.7500, -0.2812],
[-0.6250, -0.1875, 3.7812],
[-3.6562, 3.0938, -0.6250],
[2.4062, 3.9375, -0.0312],
[0.2500, 1.6250, -0.7188],
[-0.0625, 2.4688, -0.1562],
[1.7500, 1.5938, 1.1562],
[3.0938, 3.6875, -1.1562]]],
[[[2.5000, 0.4375, -3.5625],
[1.3438, -2.6875, 2.1562],
[2.0000, -0.0625, 0.8750],
[1.9062, 1.9062, -2.5938],
[-0.1250, 2.3125, 3.7500],
[1.8125, -3.7812, 2.2812],
[0.8750, -1.6562, 1.0312],
[-0.2188, 3.0000, -0.6562],
[-2.5312, -3.7812, 1.8750]]],
[[[2.7812, 0.4375, 1.5000],
[0.9375, 2.7500, 3.3750],
[-0.9375, 3.2500, 0.0312],
[-3.9062, 0.9375, -3.4062],
[0.5000, -3.0312, 0.7812],
[0.6250, 1.9375, 3.0312],
[-3.2188, 1.2500, 0.0625],
[0.0312, -2.2812, -3.7812],
[3.6250, -2.1250, 1.9062]]],
[[[1.3125, 1.7188, 2.0938],
[-0.5312, -0.8438, 0.6562],
[2.2812, 2.0625, -2.3438],
[-0.8750, 3.9688, 1.0312],
[-0.6875, 0.5312, -2.1875],
[-1.0625, 3.0625, -3.3125],
[-2.6562, 3.9375, 3.7500],
[1.4688, -2.9688, 3.6875],
[2.9062, 3.6250, -2.3438]]]]]
ts2 = torch.tensor(data2, dtype=torch.bfloat16)
test.append(ts2)
ts3 = torch.empty(size=(0, 8, 3, 4, 5), dtype=torch.bool)
test.append(ts3)
test.append('}oN#Zfm')
data5 = [[-2072510500, 1644920326],
[-1561687575, 164628287],
[ -426235928, 334186982],
[ 241170383, 1814888025]]
ts5 = torch.tensor(data5,dtype=torch.int32)
test.append(ts5)
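# `test` is unpacked positionally into max_unpool2d's parameters (input, indices, kernel_size, ...)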
torch.nn.functional.max_unpool2d(*test)
```
### Versions
Collecting environment information...
PyTorch version: 2.8.0a0+gitcbcf677
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2095.076
BogoMIPS: 4190.15
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 128 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] optree==0.15.0
[pip3] torch==2.8.0a0+gitcbcf677
[conda] numpy 2.2.5 pypi_0 pypi
[conda] optree 0.15.0 pypi_0 pypi
[conda] torch 2.8.0a0+gitcbcf677 dev_0 <develop>
| true
|
3,041,316,890
|
DISABLED test_comprehensive_scatter_xpu_float32 (__main__.TestInductorOpInfoXPU)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_float32'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,041,316,803
|
DISABLED test_comprehensive_gather_xpu_float32 (__main__.TestInductorOpInfoXPU)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_gather_xpu_float32'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,041,310,455
|
DISABLED test_comprehensive_gather_xpu_float64 (__main__.TestInductorOpInfoXPU)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 2
|
COLLABORATOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_gather_xpu_float64'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,041,281,802
|
Partially revert https://github.com/pytorch/pytorch/pull/152288
|
malfet
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit",
"ci-no-td"
] | 5
|
CONTRIBUTOR
|
Summary: As it results in build failures for some internal targets that are stuck on an older compiler. The platform update is tracked in [T223408150](https://www.internalfb.com/tasks?t=223408150)
Test Plan: CI
Differential Revision: D74220384
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,041,276,498
|
[Set] Add correct set/frozenset __init__ behavior
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* __->__ #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
| true
|
3,041,276,419
|
[Set] Raise KeyError on empty `set.pop()`
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* __->__ #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
| true
|
3,041,276,033
|
[Set] Add `set.intersection(_update)`
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* __->__ #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
| true
|
3,041,275,968
|
[Set] Add `set.difference(_update)`
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* __->__ #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
| true
|
3,041,275,887
|
[Set] Raise TypeError if the number of arguments does not match
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* __->__ #152904
* #152988
* #152987
* #150792
* #152900
* #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,041,275,491
|
[Set] Raise `KeyError` if elem not contained in the set
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* __->__ #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
| true
|
3,041,275,409
|
[Set] Add `set.issubset` and `set.issuperset`
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* __->__ #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,041,275,310
|
[Set] Add set.symmetric_difference(_update)
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* __->__ #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,041,274,894
|
Remove `property` from python_type function
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* __->__ #152900
* #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,041,274,363
|
Run url and xref linters independently
|
shoumikhin
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Also introduce `skip-xref-lint` label
| true
|
3,041,270,608
|
DISABLED test_comprehensive_scatter_xpu_float64 (__main__.TestInductorOpInfoXPU)
|
chuanqi129
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_float64'%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,041,262,325
|
[Inductor] Set correct baseline for decomposek test
|
PaulZhang12
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Differential Revision: D74218923
Running on A100 seems to result in precision loss from decompose_k. This was root caused to the fp16/bf16 reduction setting, which establishes a less precise baseline than decompose_k, as decompose_k uses the bmm.dtype overload for fp32 output.
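For illustration, a minimal sketch (not from this PR; the shapes and device are made up) of how the fp16/bf16 reduced-precision reduction setting changes how close the eager baseline sits to an fp32 reference:
```
import torch

# Illustrative shapes only; assumes a CUDA device with bf16 support.
a = torch.randn(64, 4096, device="cuda", dtype=torch.bfloat16)
b = torch.randn(4096, 64, device="cuda", dtype=torch.bfloat16)

# fp32 reference, analogous to decompose_k producing fp32 output from bf16 inputs.
ref = a.float() @ b.float()

# With reduced-precision reductions allowed, the eager bf16 matmul is a looser baseline.
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = True
loose = a @ b

# Disallowing them keeps the reduction in fp32 and tightens the baseline.
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = False
tight = a @ b

print((loose.float() - ref).abs().max().item())
print((tight.float() - ref).abs().max().item())
```
When the baseline itself is computed with reduced-precision reductions, decompose_k's fp32-accumulated result can look like a precision regression even though it is the more accurate one.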
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,041,231,864
|
Move link check jobs to pull to go with doc build
|
huydhn
|
open
|
[
"topic: not user facing",
"test-config/default"
] | 1
|
CONTRIBUTOR
|
The job is flaky at the moment (https://github.com/pytorch/pytorch/issues/152884), so it's not a good idea to keep it in a highly visible workflow like lint. I considered moving it to trunk, but keeping it together with the build job in pull also makes sense, given that the check runs nightly with the doc build anyway.
| true
|
3,041,223,210
|
Use three-dot diffs in URL and xref lint workflows
|
shoumikhin
|
closed
|
[
"Merged",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Only run the linters on the files actually modified in a PR, not on every file touched on main since the branch point
Fixes #152884
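For context, a minimal sketch (assuming a local checkout with a `main` branch) of the difference between the two diff forms; `main...HEAD` lists only the files changed on the branch since its merge-base with `main`:
```
import subprocess

def changed_files(range_spec: str) -> list[str]:
    """Return the file names reported by `git diff --name-only <range_spec>`."""
    out = subprocess.run(
        ["git", "diff", "--name-only", range_spec],
        check=True, capture_output=True, text=True,
    ).stdout
    return [line for line in out.splitlines() if line]

# Two-dot: compares the two tips, so files touched on main since the branch point also show up.
print(changed_files("main..HEAD"))
# Three-dot: compares HEAD against the merge-base, i.e. only the PR's own modifications.
print(changed_files("main...HEAD"))
```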
| true
|
3,041,213,438
|
[SDPA] Add testing to ensure stride order exactly matches
|
drisspg
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152894
## Current results
TODO: update the meta registration for the mem-efficient attention backend before landing
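As a rough, hand-written illustration of what "stride order exactly matches" means (not the PR's actual test; the shapes and the default backend are assumptions):
```
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

out = F.scaled_dot_product_attention(q, k, v)

def stride_order(t):
    # Dimensions sorted from largest to smallest stride, i.e. the memory layout order.
    return tuple(sorted(range(t.dim()), key=t.stride, reverse=True))

print("query :", stride_order(q))
print("output:", stride_order(out))
# A test along these lines would assert the two orders are identical, and that the
# meta/fake kernel reports the same strides as the real backend.
```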
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,041,191,945
|
DISABLED test_comprehensive_nn_functional_conv3d_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_conv3d_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41677250984).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_nn_functional_conv3d_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 897, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 881, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1497, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1384, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2234, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2281, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpnpbanh83/yv/cyv4hpzbwgy3dxkz6nc2zru4ray4e5iz4cr4etekikff5yamvmn7.py", line 80, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 481, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 501, in _wait_futures
kernel = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpyvpt0hui/triton/7JYL64ECI67PRQLFOMXP6H3LYIY7236YMTVNMBBVDKZDJFPG7YUQ/triton_poi_fused_convolution_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 2: SampleInput(input=Tensor[size=(1, 1, 4, 4, 4), device="cuda:0", dtype=torch.float16], args=TensorList[Tensor[size=(1, 1, 4, 4, 4), device="cuda:0", dtype=torch.float16], Tensor[size=(1,), device="cuda:0", dtype=torch.float16]], kwargs={'stride': '(2,2,2)'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=2 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_nn_functional_conv3d_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,041,191,840
|
DISABLED test_comprehensive_sort_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 4
|
NONE
|
Platforms: inductor, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_sort_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41683610526).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 11 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_sort_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 608, in check_model
actual_grad = compute_grads(example_inputs, kwargs, actual, grads)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 406, in compute_grads
return torch.autograd.grad(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 503, in grad
result = _engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2191, in backward
return impl_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2177, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2272, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 483, in __call__
return self.compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 73, in _wrapped_bw_compiler
disable(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 857, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2244, in bw_compiler
return inner_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 729, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 124, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 847, in _compile_fx_inner
mb_compiled_graph, cache_info = FxGraphCache.load_with_key(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1405, in load_with_key
compiled_graph, cache_info = FxGraphCache._lookup_graph(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1156, in _lookup_graph
artifact_path = graph.after_deserialization(constants)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 709, in after_deserialization
code_cache = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmphwss9wyp/nm/cnmehvzoqrp4ltnc66l7jbmto24hedaj6ups5yhyess6oeknnczu.py", line 113, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 481, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 501, in _wait_futures
kernel = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3528, in result
self.static_autotuner.precompile( # type: ignore[union-attr]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmphwss9wyp/triton/BO2RLLIE2I3P32Q7T6V5MQGDCYNVX34TU54FMPQGAKLO45P6V2YA/triton_poi_fused_scatter_zeros_0.cubin')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 19: SampleInput(input=Tensor[size=(5, 5, 5), device="cuda:0", dtype=torch.float16], args=(1,False), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=19 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_sort_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,041,191,838
|
DISABLED AotInductorTest.BasicPackageLoaderTestCpu (build.bin.test_aoti_inference)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: aotinductor"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=AotInductorTest.BasicPackageLoaderTestCpu&suite=build.bin.test_aoti_inference&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41683788496).
Over the past 3 hours, it has been determined flaky in 15 workflow(s) with 30 failures and 15 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `AotInductorTest.BasicPackageLoaderTestCpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
unknown file
C++ exception with description "Error in dlopen: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /tmp/aioncA/data/aotinductor/model/c7h2yca3vsx4yicdfqxgycd7yal4vbvlzyz5zkddivrsf7pk3dij.wrapper.so)
Exception raised from DynamicLibrary at ../aten/src/ATen/DynamicLibrary.cpp:36 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x9c (0x7faf5d59211c in /var/lib/jenkins/workspace/build/lib/libc10.so)
frame #1: <unknown function> + 0xdda719 (0x7faf749da719 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #2: torch::inductor::AOTIModelContainerRunner::AOTIModelContainerRunner(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool) + 0x123 (0x7faf79a1b653 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #3: torch::inductor::AOTIModelContainerRunnerCpu::AOTIModelContainerRunnerCpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, bool) + 0x73 (0x7faf79a1d2e3 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x5e1d38d (0x7faf79a1d38d in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #5: torch::inductor::AOTIModelPackageLoader::AOTIModelPackageLoader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, unsigned long, signed char) + 0xe5a (0x7faf79a0cbda in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0x34e70 (0x5635ded58e70 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #7: torch::aot_inductor::AotInductorTest_BasicPackageLoaderTestCpu_Test::TestBody() + 0x41 (0x5635ded59381 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #8: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x51 (0x5635dedaaa81 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #9: <unknown function> + 0x768b0 (0x5635ded9a8b0 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #10: testing::TestInfo::Run() + 0x40a (0x5635ded9adca in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #11: <unknown function> + 0x7aea9 (0x5635ded9eea9 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #12: testing::internal::UnitTestImpl::RunAllTests() + 0xf28 (0x5635deda02f8 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #13: testing::UnitTest::Run() + 0x93 (0x5635deda0ac3 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #14: main + 0x104 (0x5635ded53c04 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #15: __libc_start_main + 0xf3 (0x7faf5cd3b083 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #16: _start + 0x2e (0x5635ded558de in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
" thrown in the test body.
unknown file:0: C++ failure
```
</details>
Test file path: `` or `test/run_test`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/ -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @desertfire @chenyang78 @yushangdi @benjaminglass1 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,041,191,784
|
DISABLED test_comprehensive_diagonal_copy_cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 9
|
NONE
|
Platforms: inductor, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_diagonal_copy_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41683788476).
Over the past 3 hours, it has been determined flaky in 19 workflow(s) with 19 failures and 19 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_diagonal_copy_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 897, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 881, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1497, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1384, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2234, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2281, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpd2g7fvnw/qp/cqpcozpqlxqemcbguijzcjn6ox2qfmgnzujs2g3ro7772ngy22ni.py", line 77, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 481, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 501, in _wait_futures
kernel = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmp753cocx9/triton/NO7GJHRB2W3MHJFFRZOUEC3DF6YCIEY3AWS56YOBC5IRB7ZRWSHA/triton_poi_fused_diagonal_copy_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 13: SampleInput(input=Tensor[size=(5, 5, 5), device="cuda:0", dtype=torch.float16], args=(), kwargs={'offset': '2', 'dim1': '0', 'dim2': '1'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=13 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_diagonal_copy_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,041,191,783
|
DISABLED AotInductorTest.BasicTestCpu (build.bin.test_aoti_inference)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: aotinductor"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=AotInductorTest.BasicTestCpu&suite=build.bin.test_aoti_inference&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41683788496).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 24 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `AotInductorTest.BasicTestCpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
unknown file
C++ exception with description "Error in dlopen: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /tmp/torchinductor_jenkins/cycbfqzvaesb562wvzdixz3vb5gax4oe6ys7kwovlnnc2ceecyrz/cdsq7wn66anyx7drzdhzrmx7gs47b3nhhszv3xrt7bvzf26h6qtx.wrapper.so)
Exception raised from DynamicLibrary at ../aten/src/ATen/DynamicLibrary.cpp:36 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x9c (0x7f56886e111c in /var/lib/jenkins/workspace/build/lib/libc10.so)
frame #1: <unknown function> + 0xdda719 (0x7f569fbda719 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #2: torch::inductor::AOTIModelContainerRunner::AOTIModelContainerRunner(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool) + 0x123 (0x7f56a4c1b653 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #3: torch::inductor::AOTIModelContainerRunnerCpu::AOTIModelContainerRunnerCpu(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, bool) + 0x73 (0x7f56a4c1d2e3 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x331c9 (0x55d2030d61c9 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #5: torch::aot_inductor::AotInductorTest_BasicTestCpu_Test::TestBody() + 0x43 (0x55d2030d6713 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #6: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x51 (0x55d203129a81 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #7: <unknown function> + 0x768b0 (0x55d2031198b0 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #8: testing::TestInfo::Run() + 0x40a (0x55d203119dca in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #9: <unknown function> + 0x7aea9 (0x55d20311dea9 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #10: testing::internal::UnitTestImpl::RunAllTests() + 0xf28 (0x55d20311f2f8 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #11: testing::UnitTest::Run() + 0x93 (0x55d20311fac3 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #12: main + 0x104 (0x55d2030d2c04 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #13: __libc_start_main + 0xf3 (0x7f5687e8a083 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #14: _start + 0x2e (0x55d2030d48de in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
" thrown in the test body.
unknown file:0: C++ failure
```
</details>
Test file path: `` or `test/run_test`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/ -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @desertfire @chenyang78 @yushangdi @benjaminglass1 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,041,191,723
|
DISABLED AotInductorTest.BasicTestCuda (build.bin.test_aoti_inference)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: aotinductor"
] | 3
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=AotInductorTest.BasicTestCuda&suite=build.bin.test_aoti_inference&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41683610518).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 22 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `AotInductorTest.BasicTestCuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
unknown file
C++ exception with description "Error in dlopen: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /tmp/torchinductor_jenkins/cycbfqzvaesb562wvzdixz3vb5gax4oe6ys7kwovlnnc2ceecyrz/cg3lzvlvcuugtygg5rm7epgdxgoeritrbg7b5ufyto5npb4wt3pt.wrapper.so)
Exception raised from DynamicLibrary at ../aten/src/ATen/DynamicLibrary.cpp:36 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x9c (0x7f142f6e111c in /var/lib/jenkins/workspace/build/lib/libc10.so)
frame #1: <unknown function> + 0xdda719 (0x7f1446bda719 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #2: torch::inductor::AOTIModelContainerRunner::AOTIModelContainerRunner(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool) + 0x123 (0x7f144bc1b653 in /var/lib/jenkins/workspace/build/lib/libtorch_cpu.so)
frame #3: torch::inductor::AOTIModelContainerRunnerCuda::AOTIModelContainerRunnerCuda(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool) + 0x11 (0x7f1433929811 in /var/lib/jenkins/workspace/build/lib/libtorch_cuda.so)
frame #4: std::_MakeUniq<torch::inductor::AOTIModelContainerRunnerCuda>::__single_object std::make_unique<torch::inductor::AOTIModelContainerRunnerCuda, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x95 (0x555ab9e244f5 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #5: <unknown function> + 0x333eb (0x555ab9e133eb in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #6: torch::aot_inductor::AotInductorTest_BasicTestCuda_Test::TestBody() + 0x46 (0x555ab9e13796 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #7: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x51 (0x555ab9e66a81 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #8: <unknown function> + 0x768b0 (0x555ab9e568b0 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #9: testing::TestInfo::Run() + 0x40a (0x555ab9e56dca in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #10: <unknown function> + 0x7aea9 (0x555ab9e5aea9 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #11: testing::internal::UnitTestImpl::RunAllTests() + 0xf28 (0x555ab9e5c2f8 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #12: testing::UnitTest::Run() + 0x93 (0x555ab9e5cac3 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #13: main + 0x104 (0x555ab9e0fc04 in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
frame #14: __libc_start_main + 0xf3 (0x7f142ee8a083 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #15: _start + 0x2e (0x555ab9e118de in /var/lib/jenkins/workspace/build/bin/test_aoti_inference)
" thrown in the test body.
unknown file:0: C++ failure
```
</details>
Test file path: `` or `test/run_test`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/ -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @desertfire @chenyang78 @yushangdi @benjaminglass1 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
3,041,176,026
|
[inductor] cudagraph error for individually compiled transformer blocks
|
StrongerXi
|
open
|
[
"triage review",
"module: cuda graphs",
"oncall: pt2",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This was first observed in https://github.com/pytorch/pytorch/issues/150706#issuecomment-2846155656.
Note that if we uncomment the `# Pass` region, the error goes away.
```python
import torch
def f(x):
return x + 1
f = torch.compile(f, mode="reduce-overhead")
# Pass
#x = torch.ones(2, device='cuda')
#xx = torch.ones(2, device='cuda')
#y = f(x)
#z = f(xx)
# Fail
x = torch.ones(2, device='cuda')
y = f(x)
z = f(y)
```
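For reference, a minimal sketch of the two workarounds named in the error message below (this is not a fix for the underlying issue, and whether the second workaround applies depends on the usage pattern):
```python
import torch

@torch.compile(mode="reduce-overhead")
def f(x):
    return x + 1

x = torch.ones(2, device="cuda")

# Workaround 1: clone the cudagraph output outside of torch.compile before
# feeding it back in, so the next call does not alias the static output buffer.
y = f(x)
z = f(y.clone())

# Workaround 2 (per the error message): mark the beginning of each new step so
# the runtime knows outputs from the previous invocation may be overwritten.
torch.compiler.cudagraph_mark_step_begin()
w = f(x)
```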
### Error logs
```
Traceback (most recent call last):
File "/home/ryanguo99/scratch/cudagraph.py", line 17, in <module>
z = f(y)
^^^^
File "/home/ryanguo99/repos/pytorch/torch/_dynamo/eval_frame.py", line 678, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/scratch/cudagraph.py", line 3, in f
def f(x):
File "/home/ryanguo99/repos/pytorch/torch/_dynamo/eval_frame.py", line 872, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_functorch/aot_autograd.py", line 1221, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 338, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 502, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_inductor/output_code.py", line 584, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_inductor/compile_fx.py", line 1572, in run
return compiled_fn(new_inputs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_inductor/cudagraph_trees.py", line 371, in deferred_cudagraphify
return fn(inputs)
^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_inductor/utils.py", line 2570, in run
return model(new_inputs)
^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_inductor/cudagraph_trees.py", line 1992, in run
out = self._run(new_inputs, function_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_inductor/cudagraph_trees.py", line 2162, in _run
return self.record_function(new_inputs, function_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_inductor/cudagraph_trees.py", line 2196, in record_function
node = CUDAGraphNode(
^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_inductor/cudagraph_trees.py", line 950, in __init__
recording_inputs = self._allocate_and_copy_recording_inputs(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/repos/pytorch/torch/_inductor/cudagraph_trees.py", line 1666, in _allocate_and_copy_recording_inputs
self._copy_inputs_and_remove_from_src(recording_inputs, inputs)
File "/home/ryanguo99/repos/pytorch/torch/_inductor/cudagraph_trees.py", line 1050, in _copy_inputs_and_remove_from_src
torch._foreach_copy_(dst_tensors, src_tensors)
File "/home/ryanguo99/repos/pytorch/torch/utils/_device.py", line 100, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File "/home/ryanguo99/scratch/cudagraph.py", line 31, in f
return x + 1. To prevent overwriting, clone the tensor outside of torch.compile() or call torch.compiler.cudagraph_mark_step_begin() before each model invocation.
```
### Versions
main aafe8a67b5c, python 3.11
cc @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
3,041,158,226
|
[ez] fix a bunch of typos in dynamo
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152886
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,041,153,877
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
3,041,138,252
|
UNSTABLE Lint / Link checks / Lint URLs / linux-job
|
huydhn
|
open
|
[
"module: ci",
"triaged",
"unstable"
] | 3
|
CONTRIBUTOR
|
There are a number of complaints about the flakiness of this job, so we need to figure out how it can be improved:
* Implement retries (see the sketch below)
* Add an allow list?
* Check only the links from the git diff instead of all the content of the changed files
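A rough sketch of the retry option, assuming a small Python helper in the lint tooling (the function name, backoff policy, and status handling are illustrative, not the actual lint script):
```python
import time
import urllib.request

def url_ok(url: str, attempts: int = 3, timeout: float = 10.0) -> bool:
    """Return True if the URL responds without an error, retrying with backoff."""
    for i in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 400
        except Exception:
            if i == attempts - 1:
                return False
            time.sleep(2 ** i)  # back off before the next attempt
    return False
```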
cc @seemethere @malfet @pytorch/pytorch-dev-infra @shoumikhin
| true
|
3,041,128,821
|
[dynamo] Fix bug in hasattr(tensor, "size")
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #153105
* __->__ #152883
Fixes https://github.com/pytorch/pytorch/issues/135696
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,041,101,553
|
Devcontainer: Optimize apt-get commands to reduce Docker image size
|
wdvr
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
## Summary
- Added --no-install-recommends flag to all apt-get install commands to reduce unnecessary dependencies
- Added apt-get clean after package installations to remove package cache and reduce image size
- Combined multiple apt commands into single instructions to reduce Docker image layers
## Test plan
Test by building the devcontainer and verifying functionality while ensuring reduced image size
| true
|
3,041,101,246
|
Devcontainer: Replace conda with apt-based setup
|
wdvr
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
## Summary
- Replaced miniconda base image with base Ubuntu 22.04 image
- Installed Python and required dependencies using apt
- Replaced conda-based CUDA installation with apt-based version
- Updated paths in install-dev-tools.sh to reflect the new non-conda environment
- Removed conda-specific files and added requirements.txt for Python dependencies
## Test plan
Test by building and running the devcontainer in VS Code with both CPU and CUDA configurations
| true
|
3,041,100,086
|
Devcontainer: Fix context path and workspace mount
|
wdvr
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
## Summary
- Changed the devcontainer context path from '../..' to './' for both CPU and CUDA configurations
- Added workspace mount configuration to properly mount the repository in the container
- Added containerEnv to disable implicit --user pip flag
## Test plan
Test by building and running the devcontainer in VS Code
| true
|
3,041,078,752
|
xpu: support custom ops with torch.library on xpu backend
|
dvrogozh
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"keep-going",
"ciflow/xpu",
"release notes: xpu"
] | 4
|
CONTRIBUTOR
|
Fixes: https://github.com/intel/torch-xpu-ops/issues/1626
This PR starts enabling tests for `torch.library`, but more work is needed. The tests use the deprecated `torch._custom_ops` API, which was planned for removal in PyTorch 2.6 (not yet done). I think a cleanup of PyTorch would be nice before enabling more tests for xpu.
https://github.com/pytorch/pytorch/blob/a2ccda3c605bc37457a465711cee40775df54d2e/torch/_custom_op/impl.py#L47
CC: @EikanWang
| true
|
3,041,025,685
|
[Graph Partition][Flex Attention] analyze symints from subgraph inputs and outputs
|
BoyuanFeng
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Flex Attention may have symints in its subgraph inputs and outputs. The existing code implicitly captures these symints but does not explicitly store them in TritonTemplateBuffer. This leads to an error when analyzing the symints used by a Flex Attention TritonTemplateBuffer. This PR fixes the issue.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,041,020,615
|
Motivate Pytorch's forward mode AD APIs with training examples
|
redwrasse
|
open
|
[
"module: docs",
"module: autograd",
"triaged",
"module: forward ad"
] | 0
|
CONTRIBUTOR
|
### 📚 The doc issue
Pytorch documentation of forward AD (https://pytorch.org/tutorials/intermediate/forward_ad_usage.html) might benefit from training examples for the uninitiated user. The current documentation explains how to compute JVPs, though without context.
Adding training examples may also bring community cycles to testing and tweaking the forward-AD support (`*_jvp` operators) that PyTorch has invested in: https://github.com/pytorch/pytorch/issues/71117
Textbook context, for example (Chapter 6 of Deep Learning (Goodfellow, Bengio, Courville)): "_when the number of outputs of the graph is larger than the number of inputs, it is sometimes preferable to use another form of automatic differentiation called forward mode accumulation. Forward mode accumulation has been proposed for obtaining real-time computation of gradients in recurrent networks, for example (Williams and Zipser, 1989). This approach also avoids the need to store the values and gradients for the whole graph, trading off computational efficiency for memory."_
### Suggest a potential alternative/fix
Something that illustrates training, with computation and memory tradeoffs against the equivalent architecture via backprop, would be useful.
Stuck this prompt into Google Gemini:
```
""Code me up a Pytorch code sample of training an RNN using forward mode AD, following the ideas of the Williams and Zipser paper here:
# https://web.stanford.edu/class/psych209a/ReadingsByDate/02_25/Williams%20Zipser95RecNets.pdf"
```
which returned syntactically incorrect code, but nonetheless illustrates the idea of training with forward AD:
```
# Auto-generated by Google Gemini:
# "Code me up a Pytorch code sample of training an RNN using forward mode AD, following the ideas of the Williams and Zipser paper here:
# https://web.stanford.edu/class/psych209a/ReadingsByDate/02_25/Williams%20Zipser95RecNets.pdf"
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd.functional import jvp
# --- Hyperparameters ---
input_size = 10
hidden_size = 5
output_size = 1
seq_len = 5
batch_size = 2
learning_rate = 0.01
num_epochs = 1
# --- Define the RNN Model ---
class SimpleRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(SimpleRNN, self).__init__()
self.rnn = nn.RNNCell(input_size, hidden_size)
self.linear = nn.Linear(hidden_size, output_size)
def forward(self, input_, hidden):
hidden = self.rnn(input_, hidden)
output = self.linear(hidden)
return output, hidden
def init_hidden(self, batch_size):
return torch.zeros(batch_size, hidden_size)
# --- Generate Synthetic Data ---
def generate_data(batch_size, seq_len, input_size, output_size):
inputs = torch.randn(seq_len, batch_size, input_size)
targets = torch.randn(seq_len, batch_size, output_size)
return inputs, targets
inputs, targets = generate_data(batch_size, seq_len, input_size, output_size)
# --- Instantiate the Model and Optimizer ---
model = SimpleRNN(input_size, hidden_size, output_size)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
criterion = nn.MSELoss()
# --- Forward Mode AD Training Loop (Inspired by Williams & Zipser) ---
for epoch in range(num_epochs):
hidden = model.init_hidden(batch_size)
total_loss = 0
for t in range(seq_len):
input_t = inputs[t]
target_t = targets[t]
# --- Define the forward pass function for JVP ---
def forward_pass(params):
# Unpack parameters (order matters and should match model.parameters())
(weight_ih, weight_hh, bias_ih, bias_hh, weight_ho, bias_ho) = params
# Initialize hidden state (we need to do this inside for independent JVP)
current_hidden = torch.zeros(batch_size, hidden_size)
# Perform forward pass up to the current time step
for i in range(t + 1):
input_step = inputs[i]
current_hidden = torch.tanh(torch.matmul(input_step, weight_ih.T) + bias_ih +
torch.matmul(current_hidden, weight_hh.T) + bias_hh)
output = torch.matmul(current_hidden, weight_ho.T) + bias_ho
loss = criterion(output, target_t)
return loss
# --- Define tangent vectors for each parameter (for a single example in the batch) ---
tangents = tuple(torch.randn_like(p) for p in model.parameters())
# --- Compute JVP for each example in the batch ---
jvp_loss_per_example = []
for b in range(batch_size):
def forward_pass_single(params):
(weight_ih, weight_hh, bias_ih, bias_hh, weight_ho, bias_ho) = params
current_hidden = torch.zeros(1, hidden_size)
for i in range(t + 1):
input_step = inputs[i][b].unsqueeze(0)
current_hidden = torch.tanh(torch.matmul(input_step, weight_ih.T) + bias_ih +
torch.matmul(current_hidden, weight_hh.T) + bias_hh)
output = torch.matmul(current_hidden, weight_ho.T) + bias_ho
loss_single = criterion(output, target_t[b].unsqueeze(0))
return loss_single
(loss_output_single, jvp_output_single) = jvp(forward_pass_single, tuple(model.parameters()), tangents)
jvp_loss_per_example.append(jvp_output_single)
# --- Aggregate JVP over the batch (simple sum for this example) ---
aggregated_jvp = tuple(torch.stack([jvp[i] for jvp in jvp_loss_per_example]).mean(dim=0) for i in range(len(tangents)))
# --- "Update" parameters based on the JVP (simplified SGD-like update) ---
with torch.no_grad():
for param, grad_v in zip(model.parameters(), aggregated_jvp):
param.sub_(learning_rate * grad_v)
# --- Standard forward pass for loss tracking ---
output, hidden = model(input_t, hidden)
loss = criterion(output, target_t)
total_loss += loss.item()
print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {total_loss / seq_len}')
print("Training finished (one epoch).")
```
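For contrast, here is a minimal hand-written sketch using the current `torch.autograd.forward_ad` API. It trains a toy linear model with "forward gradients" (a random tangent direction plus the corresponding JVP), so it illustrates the mechanics of forward-mode AD inside a training loop rather than the Williams & Zipser RTRL algorithm; the model, data, and update rule are all illustrative:
```python
import torch
import torch.autograd.forward_ad as fwAD

torch.manual_seed(0)
X = torch.randn(64, 3)
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(64)
w = torch.zeros(3)
lr = 0.05

for step in range(200):
    v = torch.randn_like(w)                   # random tangent direction
    with fwAD.dual_level():
        w_dual = fwAD.make_dual(w, v)         # attach the tangent to the weights
        loss = ((X @ w_dual - y) ** 2).mean()
        jvp = fwAD.unpack_dual(loss).tangent  # directional derivative of the loss along v
    w = w - lr * jvp * v                      # "forward gradient" update, no backward pass

print(f"final loss: {((X @ w - y) ** 2).mean():.4f}")
```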
cc @svekars @sekyondaMeta @AlannaBurke @ezyang @albanD @gqchen @nikitaved @soulitzer @Varal7 @xmfan
| true
|
3,041,011,097
|
Move additional MPS Unary ops to Iterator
|
skotapati
|
closed
|
[
"open source",
"Merged",
"release notes: mps",
"ciflow/mps",
"keep-going"
] | 8
|
COLLABORATOR
|
Noticed some of these ops were contributing to a big chunk of the runtime for OpenLLama as well as a few other benchmarks
At the op level, moving to a TensorIterator-based Metal kernel gives a 20x speedup. Will migrate the inverse trigonometric functions & log ops in a follow-up PR, as this one is already a bit large
| true
|
3,040,995,451
|
Incorporate CUDA Memory Trimming Into DeviceCachingAllocator
|
NianZo
|
open
|
[
"triage review",
"module: windows"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
On Windows, WDDM will oversubscribe GPU device memory causing some allocations to be put into shared CPU memory. This then can cause application slowdowns as slower memory is used. CUDA provides a notification of when this is about to start via the [cudaDeviceRegisterAsyncNotification](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__DEVICE.html#group__CUDART__DEVICE_1gcff9794f21aa34d3b5ccc5b6b245da4a) API.
Registering a callback for this notification in the constructor of DeviceCachingAllocator, which can then call garbage_collect_cached_blocks, should help to reduce memory pressure when needed and avoid oversubscriptions to system memory.
Here is a pseudocode example of how this feature could be integrated:
```
DeviceCachingAllocator
{
...
// This callback handle is an opaque object that can be used to unregister the notification via
// cudaDeviceUnregisterAsyncNotification and to identify which callback registration a given
// notification corresponds to.
cudaAsyncCallbackHandle_t callback;
...
DeviceCachingAllocator() {
...
cudaDeviceRegisterAsyncNotification(cudaDevice, trimCacheCallback, (void*)context_recorder_, &callback);
}
static void trimCacheCallback(cudaAsyncNotificationInfo_t* notificationInfo, void* userData, cudaAsyncCallbackHandle_t callback)
{
auto* context_recorder = static_cast<std::atomic<CreateContextFn>*>(userData);
auto context = context_recorder->load();
// Must check the type before accessing the info member of cudaAsyncNotificationInfo_t.
// Otherwise, we could misinterpret notificationInfo->info if a different type of
// notification is sent.
if (notificationInfo->type == cudaAsyncNotificationTypeOverBudget) {
garbage_collect_cached_blocks(context);
}
}
}
```
### Alternatives
_No response_
### Additional context
_No response_
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
3,040,984,210
|
Clarify wrap_triton doc about optional triton_op usage
|
swapnita1205
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
NONE
|
This PR updates the docstring for `wrap_triton` to clarify that using `torch.library.triton_op` is optional, and only necessary if dispatch interposition is required. This addresses confusion raised in #152870.
| true
|
3,040,974,447
|
[nativert] Port string join and split to c10/util
|
yiming0416
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Summary:
Torch Native Runtime RFC: https://github.com/pytorch/rfcs/pull/72
Port the string utility functions `join` and `split` to c10/util
Test Plan:
Added tests in `string_util_test.cpp`
buck2 run mode/opt caffe2/c10/test:util_base_tests
Differential Revision: D74202473
| true
|
3,040,919,999
|
[Dynamo] Guard serialization for RANGE_ITERATOR_MATCH
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152729
* #152961
* __->__ #152872
* #152865
* #152730
* #152728
* #152727
* #152725
Tests serialization for RANGE_ITERATOR_MATCH; includes no non-test changes.
This PR handles iterator exhaustion issues by utilizing the janky solution from #152865; it passes a function to generate kwargs and `frame_state.f_locals` is updated with fresh iterators through a second kwarg generation pass after initial tracing.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,040,918,357
|
Fix bug visualizing 1D Tensor using rich
|
wangkuiyi
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"release notes: distributed (dtensor)"
] | 5
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/152848
I didn't fix the bug earlier because the example script didn't exhaustively present all combinations of 1D/2D tensor, 1D/2D mesh, and all possible sharding specs. Therefore, in this PR, I enriched the example script to cover all possible combinations.
<img width="1008" alt="f" src="https://github.com/user-attachments/assets/1745a804-a004-4f98-8332-d7498453f397" />
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,040,899,992
|
Docs Update `wrap_triton`
|
drisspg
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: user triton"
] | 0
|
CONTRIBUTOR
|
# Summary
https://github.com/pytorch/pytorch/blob/bb9c4260249ea0c57e87395eff5271fb479efb6a/torch/_library/triton.py#L210
I found this line confusing since you only need to register the kernel as a custom op if you want to interpose at the dispatch level. I don't really know what the best wording is, though.
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @oulgen @aakhundov @davidberard98
| true
|
3,040,897,462
|
DISABLED testAssertNotRegex (__main__.CPythonTest_Assertions)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 6
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=testAssertNotRegex&suite=CPythonTest_Assertions&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41672382719).
Over the past 3 hours, it has been determined flaky in 99 workflow(s) with 198 failures and 99 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `testAssertNotRegex`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/test_case.py", line 181, in setUpClass
raise unittest.TestCase.failureException(
f"Test file {inspect.getfile(cls)} does not contain a valid Python version"
)
AssertionError: Test file /var/lib/jenkins/workspace/test/dynamo/test_unittest.py does not contain a valid Python version
```
</details>
Test file path: `dynamo/test_unittest.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,040,893,229
|
Have WrapTriton work w/ `TRITON_INTERPRET=1` in eager
|
drisspg
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: user triton"
] | 1
|
CONTRIBUTOR
|
# Summary
Repro:
``` Py
import torch
import triton
import triton.language as tl
from torch.library import wrap_triton
@triton.jit
def add_kernel(
in_ptr0,
in_ptr1,
out_ptr,
n_elements,
BLOCK_SIZE: "tl.constexpr",
):
pid = tl.program_id(axis=0)
block_start = pid * BLOCK_SIZE
offsets = block_start + tl.arange(0, BLOCK_SIZE)
mask = offsets < n_elements
x = tl.load(in_ptr0 + offsets, mask=mask)
y = tl.load(in_ptr1 + offsets, mask=mask)
output = x + y
tl.store(out_ptr + offsets, output, mask=mask)
def add(x, y):
output = torch.empty_like(x)
n_elements = output.numel()
def grid_fn(meta):
return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
wrap_triton(add_kernel)[grid_fn](x, y, output, n_elements, 16)
return output
a = torch.randn(1024, device="cuda")
print(add(a, a))
```
Run
```Shell
TRITON_INTERPRET=1 python misc/wrapper_bug.py
```
Produces
```Shell
❯ TRITON_INTERPRET=1 python misc/wrapper_bug.py
Traceback (most recent call last):
File "/home/drisspg/meta/scripts/misc/wrapper_bug.py", line 38, in <module>
print(add(a, a))
^^^^^^^^^
File "/home/drisspg/meta/scripts/misc/wrapper_bug.py", line 32, in add
wrap_triton(add_kernel)[grid_fn](x, y, output, n_elements, 16)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/drisspg/.conda/envs/nightly/lib/python3.12/site-packages/torch/_library/triton.py", line 269, in wrap_triton
raise RuntimeError(
RuntimeError: wrap_triton only works on functions annotated with triton.jit or triton.autotune
```
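A possible user-side workaround until this is supported, reusing `add_kernel` from the repro above: skip `wrap_triton` when the interpreter is active and launch the kernel directly. This only works when no dispatch-level interposition is needed, and the environment-variable check is an assumption about how interpreter mode is enabled:
```python
import os

import torch
import triton
from torch.library import wrap_triton

def add(x, y):
    output = torch.empty_like(x)
    n_elements = output.numel()

    def grid_fn(meta):
        return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)

    if os.environ.get("TRITON_INTERPRET") == "1":
        # The interpreter wraps kernels in InterpretedFunction, which wrap_triton
        # rejects, so launch the kernel (from the repro above) directly.
        add_kernel[grid_fn](x, y, output, n_elements, 16)
    else:
        wrap_triton(add_kernel)[grid_fn](x, y, output, n_elements, 16)
    return output
```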
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @oulgen @aakhundov @davidberard98
| true
|
3,040,889,116
|
[dynamo] Improve final traceback frame format
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"dynamo-triage-jan2025",
"module: compile ux"
] | 0
|
MEMBER
|
For this sample program:
```python
import torch
@torch.compile(backend="eager", fullgraph=True)
def fn(x):
x = x + 1
torch._dynamo.graph_break()
return x + 1
fn(torch.ones(3))
```
The error message looks like
```
Traceback (most recent call last):
File "/data/users/williamwen/pytorch/playground2.py", line 11, in <module>
fn(torch.ones(3))
File "/data/users/williamwen/pytorch/torch/_dynamo/eval_frame.py", line 671, in _fn
raise e.with_traceback(None) from e.__cause__
torch._dynamo.exc.Unsupported: Call to `torch._dynamo.graph_break()`
Explanation: User-inserted graph break. Message: None
Hint: Remove the `torch._dynamo.graph_break()` call.
Developer debug context: Called `torch._dynamo.graph_break()` with args `[]`, kwargs `{}`
from user code:
File "/data/users/williamwen/pytorch/playground2.py", line 7, in fn
torch._dynamo.graph_break()
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
The final traceback frame
```
File "/data/users/williamwen/pytorch/torch/_dynamo/eval_frame.py", line 671, in _fn
raise e.with_traceback(None) from e.__cause__
```
is not very elegant here (what is eval_frame.py? What is _fn?). We should make it clearer in the traceback that torch.compile attempted to trace this function.
Ideally, the final traceback frame should be something like:
```
File "/data/users/williamwen/pytorch/torch/_dynamo/wrapper.py", line 671, in compile_wrapper
return _compile(fn, args, kwargs)
```
cc @chauhang @penguinwu
| true
|
3,040,881,651
|
inductor-periodic rocm tests failing since at least 4/10
|
zou3519
|
open
|
[
"high priority",
"triage review",
"module: rocm"
] | 1
|
CONTRIBUTOR
|
Example failure: https://hud.pytorch.org/pytorch/pytorch/commit/50fe1b234926b775f37bcc924e073886330c3a4d#41668981068-box

cc @ezyang @gchanan @kadeng @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,040,877,195
|
[Dynamo] Guard serialization for TUPLE_ITERATOR_LEN
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152729
* #152961
* #152872
* __->__ #152865
* #152730
* #152728
* #152727
* #152725
Tests serialization for TUPLE_ITERATOR_LEN; includes no non-test changes.
Passing a tuple iterator as input results in the iterator being exhausted during testing. I threw together a super janky workaround via accepting a func for kwarg generation and replacing `frame_state.f_locals` with newly-generated kwargs to get fresh iterators, but insights into a better approach are welcome!
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,040,868,342
|
[Graph Partition] remove PRECOMPUTED_SIZE from partition symbol inputs
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
PRECOMPUTED_SIZE is computed during runtime and should not be included in graph_partition_inputs. See the following example for a PRECOMPUTED_SIZE `ps0`.

full output code: [P1803820480](https://www.internalfb.com/phabricator/paste/view/P1803820480)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,856,116
|
[Graph Partition] remove weak dep from `partition_input_names`
|
BoyuanFeng
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Graph partition analyzes read_writes to get the partition input names. However, a weak dep is a fake dependency and is not actually read or written, so we should not include weak deps in the graph partition input names.
The following test failure is fixed by removing weak dependencies from partition_input_names:
`PYTORCH_TEST_WITH_INDUCTOR=1 python test/test_torch.py TestTorchDeviceTypeCUDA.test_params_invalidated_with_grads_invalidated_between_unscale_and_step_Adam_cuda_float32`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,834,864
|
[Memory Viz] Add Compile Context to Visualizer
|
sraikund16
|
closed
|
[
"enhancement",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler"
] | 5
|
CONTRIBUTOR
|
Summary: Adds PT2 info to the visualizer. Also makes sure we handle the case when the compile context is not in the pickle file.
Test Plan: {F1977637362}
Differential Revision: D74202811
| true
|
3,040,791,447
|
torch.cuda.use_mem_pool is not thread safe
|
syed-ahmed
|
open
|
[
"module: cuda",
"triaged",
"module: CUDACachingAllocator"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
@ngimel reported this offline. We didn't check the thread safety of the `use_mem_pool` API. Even though `use_mem_pool` creates a thread-local `MemPoolContext`, it then just calls `_cuda_beginAllocateToPool(device_index, pool.id)` with a trivially true filter, so all subsequent allocations from all threads can go to the pool until `_cuda_endAllocateCurrentStreamToPool` is called. This is because `captures_underway` in CUDACachingAllocator is not thread-local:
```
if (C10_UNLIKELY(!captures_underway.empty())) {
for (auto& entry : captures_underway) {
if (entry.second(stream)) {
auto it1 = graph_pools.find(entry.first);
TORCH_INTERNAL_ASSERT(it1 != graph_pools.end());
if (size <= kSmallSize) {
return it1->second->small_blocks;
} else {
return it1->second->large_blocks;
}
}
}
}
```
and won't be empty for all threads. `entry.second(stream)` is always true when using the `use_mem_pool` API, and hence all allocations from other threads will go to the first entry of the `captures_underway`.
The failure scenario is one thread calling `use_mem_pool` to allocate to a pool while another thread that happens to request an allocation at the same time also gets an allocation from the pool when it shouldn't. Note that for the failure to happen the pool needs to have free blocks; if it doesn't, that second thread will attempt to create a new allocation, and at that point `MemPoolContext` being thread-local will prevent it from creating a new allocation in the pool. But if there are free blocks in the pool, that second thread will just get them, setting up a nasty sequence which will eventually lead to OOM.
We will write the above scenario and add it as a test. https://github.com/pytorch/pytorch/pull/152472 has likely fixed it and we will need to use `_cuda_beginAllocateCurrentThreadToPool` in the `use_mem_pool`.
We are also investigating the following simpler test case, which shows a runtime failure in the cleanup logic:
```python
import torch
import threading
def create_mempool():
pool = torch.cuda.MemPool()
with torch.cuda.use_mem_pool(pool):
a_ten = torch.randn(1024, device="cuda")
num_threads = 4
threads = [
threading.Thread(target=create_mempool)
for t in range(num_threads)
]
for thread in threads:
thread.start()
for thread in threads:
thread.join()
```
Error:
```
terminate called after throwing an instance of 'c10::Error'
what(): captures_underway.empty() INTERNAL ASSERT FAILED at "/opt/pytorch/pytorch/c10/cuda/CUDACachingAllocator.cpp":3091, please report a bug to PyTorch.
Exception raised from synchronize_and_free_events at /opt/pytorch/pytorch/c10/cuda/CUDACachingAllocator.cpp:3091 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x88 (0x7f20c94ba568 in /usr/local/lib/python3.12/dist-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x59 (0x7f20c94562e5 in /usr/local/lib/python3.12/dist-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x37542 (0x7f20c955d542 in /usr/local/lib/python3.12/dist-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x37712 (0x7f20c955d712 in /usr/local/lib/python3.12/dist-packages/torch/lib/libc10_cuda.so)
frame #4: c10::cuda::MemPool::~MemPool() + 0x1b2 (0x7f20c9544742 in /usr/local/lib/python3.12/dist-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0xbf619a (0x7f212d6c419a in /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0x3824d0 (0x7f212ce504d0 in /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x382b31 (0x7f212ce50b31 in /usr/local/lib/python3.12/dist-packages/torch/lib/libtorch_python.so)
frame #8: python() [0x59f2a3]
frame #9: python() [0x5f7bc5]
frame #10: python() [0x5e3574]
<omitting python frames>
frame #12: python() [0x54cd32]
frame #13: python() [0x6f826c]
frame #14: python() [0x6b917c]
frame #15: <unknown function> + 0x9caa4 (0x7f21396ebaa4 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #16: __clone + 0x44 (0x7f2139778a34 in /usr/lib/x86_64-linux-gnu/libc.so.6)
Aborted (core dumped)
```
### Versions
2.8.0a0+5228986c39
cc @ptrblck @msaroufim @eqy @jerryzh168
| true
|
3,040,762,228
|
[ez] Use pip instead of conda in run_tests.sh
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"test-config/default"
] | 3
|
CONTRIBUTOR
|
Part 1 of https://github.com/pytorch/pytorch/issues/148336. The rest depends on https://github.com/pytorch/pytorch/issues/148335 to remove conda from Docker build process.
| true
|
3,040,740,348
|
Add torch._C.Tag.needs_contiguous_strides
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152859
This tag forces Inductor to make the inputs contiguous.
Test Plan:
- new test
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,713,224
|
[inductor][retry] Realize bucketize/searchsorted output
|
davidberard98
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152858
**Context**:
bucketize is relatively expensive, computationally. So it's not always profitable to fuse it if it means doing extra computation. For example, this repro:
https://gist.github.com/davidberard98/7fd6af7e6291787c246c705945a25554
shows a slowdown from 56us (eager) to ~100us (torch.compile-d): instead of computing 2\*\*15 binary searches, the fused version does 2\*\*15 * 384 - one for each of the broadcasted outputs.
**Solution**:
Realize the output of bucketize (and searchsorted, which also uses inductor's ops.bucketize). If there's an opportunity to do non-broadcasted fusions, the scheduler can still apply such fusions later on.
After this PR, instead of a slowdown, we see an improvement from 56us (eager) to 33us (compiled).
**Retry**
Original PR (https://github.com/pytorch/pytorch/pull/152644) was reverted due to internal bisect blaming this change, but the bisect was a false positive (and is marked as such)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,703,005
|
Synchronize in foreach tests after profiling
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
After the CI change from 12.4 -> 12.6 around mid-March, the foreach tests have been flaky and hard to repro due to nondeterminism. Per @davidberard98's suggestion, let's try to add a synchronize before checking profiler results to see whether this fixes the flake! The hope is that the 48 currently open foreach flaky issues will close from this change.
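A minimal sketch of the pattern being added (the foreach op and the event inspection are illustrative, not the actual test code):
```python
import torch
from torch.profiler import ProfilerActivity, profile

tensors = [torch.ones(4, device="cuda") for _ in range(3)]
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as p:
    torch._foreach_add_(tensors, 1.0)
torch.cuda.synchronize()  # drain outstanding GPU work before inspecting events
kernel_names = [e.key for e in p.key_averages()]
```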
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152857
| true
|
3,040,673,967
|
Fix HF loading when there's no metadata file to work with fsspec
|
ankitageorge
|
open
|
[
"oncall: distributed",
"fb-exported",
"topic: not user facing",
"release notes: distributed (checkpoint)"
] | 4
|
CONTRIBUTOR
|
Summary: HF loading when there is no metadata is an edge case for some users. We were previously calling safe_open(filename) to get the keys in the safetensors file, but this doesn't work with fsspec when models use a backend other than the local fs (i.e. hf, s3, etc.). This diff updates the code to open the file with fsspec.open() and then call safetensors.deserialize() to get the keys.
Test Plan: unit test and e2e test reading from hf
Differential Revision: D74181513
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,040,639,072
|
[dynamo][ez] Remove unused guard OBJECT_MUTATION.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary: seems not used anywhere https://www.internalfb.com/code/search?q=case%3Ayes%20filepath%3Acaffe2%20OBJECT_MUTATION
Test Plan: CI
Differential Revision: D74196559
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,040,551,646
|
torch.multinomial is not deterministic for large number of input probabilities when replacement=True
|
szvsw
|
closed
|
[] | 4
|
NONE
|
### 🐛 Describe the bug
Hello,
I think there is a bug in `torch.multinomial` which sporadically appears when the number of input probabilities is large (e.g. greater than 10k; it gets worse as the number gets larger) and `replacement=True`. I think it is also worse when the dynamic range of the probabilities is very high - i.e. many values very close to zero. I would love it if this were user error, but I'm pretty sure it is not.
My guess is that it is some sort of numerical / floating point error?
What happens is that sometimes, the nth row of the returned indices is off by 1. e.g. in one trial, the 57th row out of 1000 might be index `57291` but in another trial it might end up as `57292` or `57290` (but never something far away like `1272`).
Here is a reproducible example - nb: we are on `torch==2.5.1+cu118`. If this is fixed in another version, apologies.
```
import torch
import torch.nn.functional as F
def run_test(
device: torch.device,
n_candidates: int,
n_draws: int,
n_trials: int,
replacement: bool,
band_width: float,
band_offset: float,
generator: torch.Generator,
):
all_results = []
for _ in range(n_trials):
generator.manual_seed(42)
# Stand-in for some process which randomly generates logits.
# e.g. randomly sampling designs and running them through a classifier.
logits = (
torch.rand(n_candidates, device=device, generator=generator) * band_width
- band_offset
)
# convert logits to probabilities
probs = F.softmax(logits, dim=-1)
# draw samples
results = torch.multinomial(
probs,
n_draws,
replacement=replacement,
generator=generator,
)
all_results.append(results.unsqueeze(1))
# combine the samples, shape is (n_draws, n_trials)
results = torch.cat(all_results, dim=1)
errors = []
for nth_draw in range(n_draws):
# get the nth draw across all trials
nth_draw_across_trials = results[nth_draw]
unique, counts = torch.unique(nth_draw_across_trials, return_counts=True)
# We expect each draw to be identical across all trials.
# which would mean a single unique index was selected across all trials.
if len(unique) > 1:
errors.append((nth_draw, unique))
message = (
f"There are {len(errors)} row(s) in the output that are not identical across all {n_trials} trials."
" The following samples are misaligned:"
)
if len(errors) > 0:
print(message)
for nth_draw, unique in errors:
print(f"Draw {nth_draw} was not identical across all trials. {len(unique)} different indices were sampled across {n_trials} trials: {unique.cpu().numpy().tolist()}")
else:
print("All samples are identical across all trials.")
device = torch.device("cuda")
generator = torch.Generator(device)
band_width = 5
band_offset = -5
n_candidates = 1_000_000
n_draws = 1000
n_trials = 1000
replacement = True
run_test(
device=device,
n_candidates=n_candidates,
n_draws=n_draws,
n_trials=n_trials,
replacement=replacement,
band_width=band_width,
band_offset=band_offset,
generator=generator,
)
```
Example output (will change across runs):
```
There are 11 row(s) in the output that are not identical across all 1000 trials. The following samples are misaligned:
Draw 65 was not identical across all trials. 2 different indices were sampled across 1000 trials: [197680, 197681]
Draw 111 was not identical across all trials. 2 different indices were sampled across 1000 trials: [128305, 128306]
Draw 115 was not identical across all trials. 2 different indices were sampled across 1000 trials: [181477, 181478]
Draw 306 was not identical across all trials. 2 different indices were sampled across 1000 trials: [66297, 66298]
Draw 465 was not identical across all trials. 2 different indices were sampled across 1000 trials: [131393, 131394]
Draw 550 was not identical across all trials. 2 different indices were sampled across 1000 trials: [90228, 90229]
Draw 585 was not identical across all trials. 2 different indices were sampled across 1000 trials: [129377, 129378]
Draw 802 was not identical across all trials. 2 different indices were sampled across 1000 trials: [154692, 154693]
Draw 846 was not identical across all trials. 2 different indices were sampled across 1000 trials: [175321, 175322]
Draw 856 was not identical across all trials. 2 different indices were sampled across 1000 trials: [135070, 135071]
Draw 954 was not identical across all trials. 2 different indices were sampled across 1000 trials: [184852, 184853]
```
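As a possible hedge, sampling with replacement can also be done deterministically via the inverse-CDF trick (`cumsum` + `searchsorted`). The sketch below reuses `probs`, `n_draws`, and `generator` from the repro above; it is an illustration, not a claim that it reproduces `torch.multinomial`'s exact sampling sequence:
```python
# Inverse-CDF sampling with replacement: draw uniforms and locate them in the
# cumulative distribution. Each draw depends only on its own uniform sample.
cdf = torch.cumsum(probs, dim=-1)
cdf = cdf / cdf[..., -1:]          # renormalize to guard against rounding drift
u = torch.rand(n_draws, device=probs.device, generator=generator)
samples = torch.searchsorted(cdf, u)
```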
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise (10.0.22621 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22621-SP0
Is CUDA available: True
CUDA runtime version: 10.0.130
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A5500
Nvidia driver version: 531.14
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i7-12700
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2100
MaxClockSpeed: 2100
L2CacheSize: 12288
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] pytorch-lightning==2.5.1
[pip3] tinynumpy==1.2.1
[pip3] torch==2.5.1+cu118
[pip3] torchmetrics==1.7.1
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 haa95532_640
[conda] mkl-service 2.4.0 py310h2bbff1b_0
[conda] mkl_fft 1.3.1 py310ha0764ea_0
[conda] mkl_random 1.2.2 py310h4ed8f06_0
[conda] numpy 1.23.5 py310h60c9a35_0
[conda] numpy-base 1.23.5 py310h04254f7_0
[conda] numpydoc 1.5.0 py310haa95532_0
[conda] pytorch 1.12.1 cpu_py310h5e1f01c_1
(base) PS E:\repos\path\to\my\repo\censored\for\privacy>
| true
|
3,040,540,053
|
[dynamo] Recursively realize the stack_values
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td",
"ciflow/pull"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152883
* __->__ #152853
Might also fix - https://github.com/pytorch/pytorch/issues/135696
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,040,518,349
|
[nativert] Move AutoTimer to c10.
|
qxy11
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary:
Torch Native Runtime RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md
To land the runtime into PyTorch core, we will gradually land logical parts of the code into the Github issue and get each piece properly reviewed.
This diff adds a small library utility implementing an auto timer to measure execution time of certain regions for Torch Native Runtime.
Test Plan: c10/test/util/AutoTimer_test
Differential Revision: D74186488
| true
|
3,040,505,193
|
DISABLED test_comprehensive___rmul___cuda_float16 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive___rmul___cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41657320849).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive___rmul___cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 676, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 895, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 879, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1495, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1382, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2234, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2281, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3006, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmp951576za/bd/cbdxilgnquycifwtbqa2zleeurqkfwfjklar5m24djprgcxif6eu.py", line 80, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 479, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 499, in _wait_futures
kernel = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3508, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpwywhytlr/triton/CW3CESIT65URD24S3L3CSNKXYM4MH5DCORPARIFK6JNFTRMHQDUQ/triton_poi_fused_mul_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.float16], args=TensorList[Tensor[size=(), device="cuda:0", dtype=torch.float16]], kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive___rmul___cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,040,476,998
|
[cutlass backend][BE][clean-up] refactor to remove use of autotune_fallback_to_aten=True in cutlass backend tests
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152850
Differential Revision: [D74192001](https://our.internmc.facebook.com/intern/diff/D74192001/)
Motivation: clean up after https://github.com/pytorch/pytorch/issues/147479. I plan to leave the rest of the clean-up as a first-time issue.
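For reference, a minimal sketch of what such a test setup can look like after the clean-up; `max_autotune` and `max_autotune_gemm_backends` are existing inductor config options, while the model, dtypes, and shapes below are made up for illustration:
```python
# Illustrative sketch only: exercise the CUTLASS GEMM backend without relying on
# an ATen autotune fallback. Shapes and dtypes are arbitrary example values.
import torch
import torch._inductor.config as inductor_config

def run_cutlass_gemm():
    with inductor_config.patch(
        max_autotune=True,
        max_autotune_gemm_backends="CUTLASS",  # only the CUTLASS backend is requested
    ):
        a = torch.randn(128, 64, device="cuda", dtype=torch.float16)
        b = torch.randn(64, 256, device="cuda", dtype=torch.float16)
        compiled_mm = torch.compile(torch.mm)
        return compiled_mm(a, b)
```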
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,476,700
|
Clarification on build_stage usage with DistributedDataParallel: the code example in torch.distributed.pipelining
|
jin2001-2001
|
open
|
[
"oncall: distributed"
] | 0
|
NONE
|
### 📚 The doc issue
The tutorial provides the following example of how to use the `build_stage` helper function in `torch.distributed.pipelining`:
```python
from torch.distributed.pipelining import build_stage
from torch.nn.parallel import DistributedDataParallel

dp_mod = DistributedDataParallel(stage_mod)
info = pipe.info()
stage = build_stage(dp_mod, stage_idx, info, device, group)
```
I am exploring how to use PyTorch to implement heterogeneous parallelism, where each pipeline-parallel (PP) stage could apply a different data-parallel (DP) strategy. At first glance, the example appears to show how to wrap a stage module with DistributedDataParallel before passing it to `build_stage` to create a pipeline stage, implying support for such a use case.
However, after checking the input type of `build_stage`, it seems that the first argument is expected to be an `nn.Module`, while DistributedDataParallel returns a DistributedDataParallel wrapper object rather than the original `nn.Module` subclass directly. This suggests that wrapping the stage in DDP before calling `build_stage` may not actually be supported.
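As a side check on the type relationship only (a minimal sketch; it does not settle whether `build_stage` is intended to accept a DDP-wrapped stage):
```python
# Checks only whether the DDP wrapper class is itself an nn.Module subclass.
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

print(issubclass(DistributedDataParallel, nn.Module))
```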
I’d like to confirm:
1. Is this a typo or a misleading example in the documentation/tutorial?
2. What is the recommended way to apply second-tier DP (e.g., DDP) to a stage in a first-tier PP (pipeline-parallel) setup?
Any clarification or guidance would be appreciated. Thanks!
### Suggest a potential alternative/fix
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,040,463,307
|
Failed to visualize 1D DTensor
|
wangkuiyi
|
closed
|
[
"oncall: distributed",
"module: dtensor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
To reproduce this issue, run the follow script using torchrun:
```python
import os

import torch
import torch.distributed as dist
import torch.distributed.tensor as dt
import torch.distributed.tensor.debug  # makes dt.debug available

rank = int(os.getenv("RANK", "0"))


def render(t, msg):
    # Helper kept from the original repro; it is not needed to trigger the bug.
    if rank == 0:
        print(msg)
    dt.debug.visualize_sharding(t, use_rich=False)


m = dist.init_device_mesh("cuda", (4,))

# Sharded 1D DTensor: this first call raises the IndexError shown below.
t = dt.distribute_tensor(torch.ones(4), m, [dt.Shard(dim=0)])
dt.debug.visualize_sharding(t, use_rich=True)

# Replicated 1D DTensor.
t = dt.distribute_tensor(torch.ones(4), m, [dt.Replicate()])
dt.debug.visualize_sharding(t, use_rich=True)
```
The first call to `visualize_sharding` (the sharded 1D case) fails with:
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/w/l/s.py", line 17, in <module>
[rank0]: dt.debug.visualize_sharding(t, use_rich=True)
[rank0]: File "/root/w/pytorch/torch/distributed/tensor/debug/_visualize_sharding.py", line 201, in visualize_sharding
[rank0]: (offset[1], offset[1] + shape[1] - 1),
[rank0]: ~~~~~~^^^
[rank0]: IndexError: tuple index out of range
```
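A possible workaround sketch while the 1D case is broken (run with torchrun like the repro; the assumption here is that the visualizer's offset indexing expects at least two dimensions, so a 2D layout may render correctly):
```python
# Workaround sketch: give the tensor an explicit second dimension before visualizing.
import torch
import torch.distributed as dist
import torch.distributed.tensor as dt
import torch.distributed.tensor.debug

m = dist.init_device_mesh("cuda", (4,))
t2d = dt.distribute_tensor(torch.ones(4, 1), m, [dt.Shard(dim=0)])
dt.debug.visualize_sharding(t2d, use_rich=True)  # 2D offsets avoid the 1D indexing path
```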
### Versions
```
# pip list | grep torch
torch 2.8.0a0+git730a077 /root/w/pytorch
```
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @tianyu-l @XilunWu
| true
|
3,040,460,352
|
Clean up CUTLASS_VERSION post cutlass version update
|
henrylhtsang
|
closed
|
[
"good first issue",
"triaged",
"actionable",
"topic: not user facing",
"module: core aten"
] | 4
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
For example, here:
https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/cutlass_extensions/epilogue/thread/ft_fused_activations.h#L75
Don't forget to remove `#include <cutlass/version.h>` as well.
### Alternatives
_No response_
### Additional context
_No response_
cc @manuelcandales @SherlockNoMad @angelayi
| true
|
3,040,429,229
|
[partitioner] Fix argument to _broadcast_on_rank0
|
fmassa
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 5
|
MEMBER
|
Summary:
There was a bug when I refactored my original implementation.
This should fix it.
Test Plan: Run on some internal workloads
Differential Revision: D74190485
| true
|
3,040,423,446
|
torch.distributions.Beta.entropy returns negative values
|
MaHaArt
|
closed
|
[
"module: distributions",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
The Beta distribution's `entropy` returns negative values for perfectly valid alpha/beta parameters. It seems that the sign is flipped. The following code compares the results of the PyTorch built-in function with a custom implementation.
```
import torch
from torch.special import digamma
def stable_beta_entropy(alpha, beta):
    ab = alpha + beta
    entropy = (
        - torch.lgamma(alpha) - torch.lgamma(beta)
        + torch.lgamma(ab)
        + (alpha - 1) * digamma(alpha)
        + (beta - 1) * digamma(beta)
        - (ab - 2) * digamma(ab)
    )
    return entropy.clamp(min=0.0)


if __name__ == '__main__':
    alpha = torch.tensor([1.1, 1.2, 1.3])
    beta = torch.tensor([1.3, 1.2, 1.1])

    # PyTorch --> *NEGATIVE* entropy, which does not make sense mathematically
    dist = torch.distributions.Beta(alpha, beta)
    print(f"Pytorch Beta Entropy: {dist.entropy()}")  # should all be > 0

    # Custom implementation --> *POSITIVE* entropy
    print(f"Custom Beta Entropy: {stable_beta_entropy(alpha, beta)}")
```
Output of above script:
```
Pytorch Beta Entropy: tensor([-0.0206, -0.0108, -0.0206])
Custom Beta Entropy: tensor([0.0206, 0.0108, 0.0206])
```
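For triage, a cross-check against an independent implementation (a sketch that assumes SciPy is available; it only reports values side by side and does not by itself settle which sign convention is correct):
```python
# Cross-check sketch: compare torch's Beta entropy with SciPy's for the same parameters.
import torch
from scipy.stats import beta as scipy_beta

alphas = [1.1, 1.2, 1.3]
betas = [1.3, 1.2, 1.1]

torch_entropy = torch.distributions.Beta(torch.tensor(alphas), torch.tensor(betas)).entropy()
scipy_entropy = [scipy_beta(a, b).entropy() for a, b in zip(alphas, betas)]
print("torch:", torch_entropy)
print("scipy:", scipy_entropy)
```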
### Versions
PyTorch version: 2.7.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
cc @fritzo @neerajprad @alicanb @nikitaved
| true
|
3,040,407,022
|
[inductor] Allow num_program specification for TMA workspace
|
mandroid6
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary:
Allow TMA workspace creation to accept a `num_programs` argument, which defaults to `num_sms` when not specified.
A kernel needs a total of `num_programs * num_tma_descriptors` descriptors.
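A rough sketch of the sizing rule described above; the per-descriptor byte size and the default SM count below are illustrative assumptions rather than values taken from this PR:
```python
# Illustrative helper: one descriptor slot per (program, descriptor) pair.
TMA_DESCRIPTOR_BYTES = 128  # assumed per-descriptor size, for illustration only

def tma_workspace_bytes(num_tma_descriptors, num_programs=None, num_sms=108):
    # Default num_programs to the device's SM count when the caller does not specify it.
    if num_programs is None:
        num_programs = num_sms
    return num_programs * num_tma_descriptors * TMA_DESCRIPTOR_BYTES
```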
Test Plan: CI.
Differential Revision: D74189599
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,040,369,015
|
ci: Remove conda-env-macOS-ARM64, prefer pip
|
seemethere
|
open
|
[
"topic: not user facing",
"ciflow/mps"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152843
* #152545
Also introduces a `setup-macos` action, which should give us some flexibility in how we set up our macOS jobs.
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
3,040,343,437
|
Add memory reporting for XPU to Memory Profiler
|
frost-intel
|
open
|
[
"open source",
"ciflow/trunk",
"ciflow/xpu",
"release notes: xpu"
] | 4
|
COLLABORATOR
|
Adds support for XPU `profile_memory` in the PyTorch Profiler.
Currently, when `profile_memory=True` is passed to `torch.profiler.profile`, there is no XPU memory reported. For example, the profiling table printed by the code below is missing any `XPU Mem` columns:
<details><summary>profiling.py</summary>
<p>
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.profiler import profile, ProfilerActivity


class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.conv1 = nn.Conv1d(20, 20, 15, padding="same")
        self.flatten = nn.Flatten()
        self.net1 = nn.Linear(2048, 4096)
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(4096, 5)

    def forward(self, x):
        res = self.conv1(x)
        res = self.flatten(res)
        res = self.net1(res)
        return self.net2(self.relu(res))


def demo_basic():
    model = ToyModel().to("xpu")
    loss_fn = nn.MSELoss().to("xpu")
    optimizer = optim.SGD(model.parameters(), lr=0.001)

    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.XPU], profile_memory=True) as prof:
        for epoch in range(10):
            optimizer.zero_grad()
            outputs = model(torch.randn(20, 2048).to("xpu"))
            labels = torch.randn(20, 5).to("xpu")
            loss_fn(outputs, labels).backward()
            optimizer.step()

    print(prof.key_averages().table(max_name_column_width=100, sort_by="xpu_time_total", row_limit=100))


if __name__ == "__main__":
    demo_basic()
```
</p>
</details>
```
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self XPU Self XPU % XPU total XPU time avg CPU Mem Self CPU Mem # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
gemm_kernel 0.00% 0.000us 0.00% 0.000us 0.000us 1.501ms 44.73% 1.501ms 25.024us 0 b 0 b 60
autograd::engine::evaluate_function: AddmmBackward0 0.12% 1.067ms 30.47% 260.929ms 13.046ms 0.000us 0.00% 1.009ms 50.448us 0 b 0 b 20
AddmmBackward0 0.09% 744.983us 15.99% 136.944ms 6.847ms 0.000us 0.00% 784.640us 39.232us 0 b 0 b 20
aten::mm 15.41% 131.956ms 15.79% 135.167ms 3.379ms 784.640us 23.37% 784.640us 19.616us 0 b 0 b 40
aten::linear 0.02% 156.361us 20.58% 176.187ms 8.809ms 0.000us 0.00% 741.760us 37.088us 0 b 0 b 20
aten::addmm 20.25% 173.371ms 20.52% 175.723ms 8.786ms 741.760us 22.10% 741.760us 37.088us 0 b 0 b 20
Optimizer.step#SGD.step 0.40% 3.429ms 5.55% 47.509ms 4.751ms 0.000us 0.00% 488.960us 48.896us 0 b 0 b 10
aten::_foreach_add_ 4.81% 41.162ms 5.15% 44.080ms 4.408ms 488.960us 14.57% 488.960us 48.896us 0 b 0 b 10
at::native::xpu::MultiTensorApplyKernelFunctor<at::n... 0.00% 0.000us 0.00% 0.000us 0.000us 422.880us 12.60% 422.880us 42.288us 0 b 0 b 10
autograd::engine::evaluate_function: ConvolutionBack... 0.03% 280.041us 4.36% 37.328ms 3.733ms 0.000us 0.00% 356.320us 35.632us 0 b 0 b 10
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 856.227ms
Self XPU time total: 3.357ms
```
This PR updates `XPUCachingAllocator.cpp` to report allocation events to the profiler, so that they are printed in the table:
```
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self XPU Self XPU % XPU total XPU time avg CPU Mem Self CPU Mem XPU Mem Self XPU Mem # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
gemm_kernel 0.00% 0.000us 0.00% 0.000us 0.000us 1.436ms 43.64% 1.436ms 23.939us 0 b 0 b 0 b 0 b 60
autograd::engine::evaluate_function: AddmmBackward0 0.13% 1.186ms 29.92% 262.875ms 13.144ms 0.000us 0.00% 1.005ms 50.272us 0 b 0 b 320.94 Mb -4.69 Mb 20
AddmmBackward0 0.09% 815.288us 16.48% 144.802ms 7.240ms 0.000us 0.00% 790.720us 39.536us 0 b 0 b 325.47 Mb 0 b 20
aten::mm 15.86% 139.342ms 16.26% 142.875ms 3.572ms 790.720us 24.03% 790.720us 19.768us 0 b 0 b 325.47 Mb 325.47 Mb 40
aten::linear 0.02% 182.856us 20.46% 179.775ms 8.989ms 0.000us 0.00% 669.440us 33.472us 0 b 0 b 3.13 Mb 0 b 20
aten::addmm 20.10% 176.607ms 20.40% 179.210ms 8.961ms 669.440us 20.34% 669.440us 33.472us 0 b 0 b 3.13 Mb 3.13 Mb 20
Optimizer.step#SGD.step 0.42% 3.692ms 5.61% 49.267ms 4.927ms 0.000us 0.00% 486.640us 48.664us 0 b 0 b 0 b 0 b 10
aten::_foreach_add_ 4.83% 42.439ms 5.19% 45.574ms 4.557ms 486.640us 14.79% 486.640us 48.664us 0 b 0 b 0 b -20.00 Kb 10
at::native::xpu::MultiTensorApplyKernelFunctor<at::n... 0.00% 0.000us 0.00% 0.000us 0.000us 420.960us 12.79% 420.960us 42.096us 0 b 0 b 0 b 0 b 10
autograd::engine::evaluate_function: ConvolutionBack... 0.04% 310.719us 4.47% 39.279ms 3.928ms 0.000us 0.00% 339.520us 33.952us 0 b 0 b -2.89 Mb -3.12 Mb 10
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 878.627ms
Self XPU time total: 3.291ms
```
These XPU memory numbers match the same profiling results on CUDA.
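As a quick programmatic check (a sketch that assumes an XPU-enabled build; the aggregated-event attribute name `self_device_memory_usage` is my reading of the current profiler API and should be treated as an assumption), the recorded memory can also be inspected without printing the full table:
```python
# Sketch: inspect recorded device-memory deltas instead of reading the printed table.
import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.XPU],
             profile_memory=True) as prof:
    x = torch.randn(1024, 1024, device="xpu")
    y = x @ x

for evt in prof.key_averages():
    if evt.self_device_memory_usage:  # nonzero only for ops that allocated or freed memory
        print(f"{evt.key}: {evt.self_device_memory_usage} bytes")
```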
| true
|
3,040,305,275
|
[ROCm] opportunistic fastatomics - fix build error with newer compilers
|
pragupta
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/inductor-rocm",
"ciflow/rocm-mi300"
] | 7
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,040,294,257
|
[precompile] Add BundledAOTAutogradCacheEntry
|
jamesjwu
|
open
|
[
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #153064
* __->__ #152840
Finally, this PR adds BundledAOTAutogradCacheEntry. A BundledAOTAutogradCacheEntry is an AOTAutogradCacheEntry that saves the entire CompiledFxGraph directly in the entry.
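To illustrate the idea only (the class and field names below are assumptions for exposition, not the actual AOTAutograd cache definitions), a bundled entry carries the compiled artifact inline instead of referring to it by a separate cache key:
```python
# Conceptual sketch, not the real torch._functorch cache classes.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class BundledCacheEntrySketch:
    # The compiled forward/backward payloads travel inside the entry itself,
    # so loading the entry does not require a separate FX-graph cache lookup.
    forward_compiled: Any
    backward_compiled: Optional[Any]
    metadata: dict
```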
| true
|