Column schema (from the dataset viewer):

| column | type | range / classes |
| --- | --- | --- |
| id | int64 | 2.74B – 3.05B |
| title | string | length 1 – 255 |
| user | string | length 2 – 26 |
| state | string | 2 classes |
| labels | list | length 0 – 24 |
| comments | int64 | 0 – 206 |
| author_association | string | 4 classes |
| body | string | length 7 – 62.5k |
| is_title | bool | 1 class |
2,818,126,325
DISABLED test_return_tuple (__main__.TestScript)
pytorch-bot[bot]
closed
[ "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: dynamo" ]
1
NONE
Platforms: dynamo This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_return_tuple&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36340322722). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_return_tuple` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/var/lib/jenkins/workspace/test/test_jit.py", line 11331, in test_return_tuple def test_return_tuple(self): File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript source = textwrap.dedent(inspect.getsource(script)) ~~~~~~~~~~~~~~~~~^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource lines, lnum = getsourcelines(object) ~~~~~~~~~~~~~~^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines lines, lnum = findsource(object) ~~~~~~~~~~^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource lines = linecache.getlines(file, module.__dict__) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__ return self._torchdynamo_orig_callable( ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ frame, cache_entry, self.hooks, frame_state, skip=1 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__ result = self._inner_convert( frame, cache_entry, hooks, frame_state, skip=skip + 1 ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__ return _compile( frame.f_code, ...<14 lines>... skip=skip + 1, ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile raise InternalTorchDynamoError( f"{type(e).__qualname__}: {str(e)}" ).with_traceback(e.__traceback__) from None File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function return function(*args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner return _compile_inner(code, one_graph, hooks, transform) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner out_code = transform_code_object(code, transform) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object transformations(instructions, code_options) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn return fn(*args, **kwargs) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform tracer.run() ~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run super().run() ~~~~~~~~~~~^^ File 
"/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run while self.step(): ~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step self.dispatch_table[inst.opcode](self, inst) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE self._return(inst) ~~~~~~~~~~~~^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return and not self.symbolic_locals_contain_module_class() ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class if isinstance(v, UserDefinedClassVariable) and issubclass( ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__ instance = instance.realize() File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize self._cache.realize() ~~~~~~~~~~~~~~~~~~~^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize self.vt = VariableTracker.build(tx, self.value, source) ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build return builder.VariableBuilder(tx, source)(value) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^ File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__ vt = self._wrap(value) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap result = dict( build_key_value(i, k, v) for i, (k, 
v) in enumerate(get_items_from_dict(value)) ) File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr> for i, (k, v) in enumerate(get_items_from_dict(value)) ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^ torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration from user code: File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines return cache[filename][2] Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True To execute this test, run the following from the base repo dir: PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_return_tuple This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `test_jit.py` ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_jit.py -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0) headers: {} cc @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
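The root cause reported above, `RuntimeError: dictionary changed size during iteration`, is plain CPython behavior: `linecache` mutates its module-level cache while Dynamo's builder is enumerating its items. A minimal standalone sketch of that failure mode (illustrative only, not the PyTorch code path; the dict and key names are made up):

```python
# Inserting a key while a dict-items iterator is live raises RuntimeError,
# mirroring linecache mutating its cache while builder.py enumerates it.
cache = {"file_a.py": 1, "file_b.py": 2}
try:
    for i, (k, v) in enumerate(cache.items()):
        cache["file_c.py"] = 3  # mutation mid-iteration
except RuntimeError as e:
    print(e)  # dictionary changed size during iteration
```

This is why the traceback surfaces deep inside `VariableBuilder._wrap` even though the mutation happens in `linecache.getlines`.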
true
2,818,020,380
Publish pytorch RC docker images before release
bhack
closed
[ "module: binaries", "oncall: releng", "triaged", "module: docker" ]
6
CONTRIBUTOR
### 🚀 The feature, motivation and pitch I think we need to at least have RC images published before we do the final image release, as currently users can test only the final release: https://hub.docker.com/r/pytorch/pytorch ### Alternatives _No response_ ### Additional context _No response_ cc @seemethere @malfet @osalpekar @atalman
true
2,817,692,954
DISABLED test_dtensor_seq_par_shard_dim_0 (__main__.MicroPipelineTPTest)
pytorch-bot[bot]
open
[ "triaged", "module: flaky-tests", "skipped", "module: c10d" ]
4
NONE
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dtensor_seq_par_shard_dim_0&suite=MicroPipelineTPTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36332698978). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_dtensor_seq_par_shard_dim_0` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/var/lib/jenkins/pytorch/test/distributed/tensor/parallel/test_micro_pipeline_tp.py", line 425, in test_dtensor_seq_par self.assertIn("fused_all_gather_matmul", code) File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1112, in assertIn self.fail(self._formatMessage(msg, standardMsg)) File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail raise self.failureException(msg) AssertionError: 'fused_all_gather_matmul' not found in '# AOT ID: [\'0_forward\']\nfrom ctypes import c_void_p, c_long, c_int\nimport torch\nimport math\nimport random\nimport os\nimport tempfile\nfrom math import inf, nan\nfrom cmath import nanj\nfrom torch._inductor.hooks import run_intermediate_hooks\nfrom torch._inductor.utils import maybe_profile\nfrom torch._inductor.codegen.memory_planning import _align as align\nfrom torch import device, empty_strided\nfrom torch._inductor.async_compile import AsyncCompile\nfrom 
torch._inductor.select_algorithm import extern_kernels\nfrom torch._inductor.codegen.multi_kernel import MultiKernelCall\nimport triton\nimport triton.language as tl\nfrom torch._inductor.runtime.triton_heuristics import (\n grid,\n split_scan_grid,\n grid_combo_kernels,\n start_graph,\n end_graph,\n cooperative_reduction_grid,\n)\nfrom torch._C import _cuda_getCurrentRawStream as get_raw_stream\nfrom torch._C import _cuda_getCurrentRawStream as get_raw_stream\n\naten = torch.ops.aten\ninductor_ops = torch.ops.inductor\n_quantized = torch.ops._quantized\nassert_size_stride = torch._C._dynamo.guards.assert_size_stride\nempty_strided_cpu = torch._C._dynamo.guards._empty_strided_cpu\nempty_strided_cuda = torch._C._dynamo.guards._empty_strided_cuda\nempty_strided_xpu = torch._C._dynamo.guards._empty_strided_xpu\nreinterpret_tensor = torch._C._dynamo.guards._reinterpret_tensor\nalloc_from_pool = torch.ops.inductor._alloc_from_pool\nasync_compile = AsyncCompile()\nempty_strided_p2p = torch._C._distributed_c10d._SymmetricMemory.empty_strided_p2p\n\n\n# kernel path: /tmp/tmp4rsnqzr5/sd/csdkwmy2wmyqm3e7apig34zpheymhb5ytu7whjukqo6ftyrvn4wx.py\n# Topologically Sorted Source Nodes: [linear], Original ATen: [aten.mm]\n# Source node to ATen node mapping:\n# linear => constant_pad_nd_default\n# Graph fragment:\n# %constant_pad_nd_default : [num_users=1] = call_function[target=torch.ops.aten.constant_pad_nd.default](args = (%wait_tensor, [0, 2, 0, 0]), kwargs = {})\ntriton_poi_fused_mm_0 = async_compile.triton(\'triton_poi_fused_mm_0\', \'\'\'\nimport triton\nimport triton.language as tl\nfrom triton.compiler.compiler import AttrsDescriptor\n\nfrom torch._inductor.runtime import triton_helpers, triton_heuristics\nfrom torch._inductor.runtime.triton_helpers import libdevice, math as tl_math\nfrom torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties\ntriton_helpers.set_driver_to_gpu()\n\n@triton_heuristics.pointwise(\n size_hints={\'x\': 256}, 
\n filename=__file__,\n triton_meta={\'signature\': {\'in_ptr0\': \'*fp32\', \'out_ptr0\': \'*fp32\', \'xnumel\': \'i32\'}, \'device\': DeviceProperties(type=\'hip\', index=0, multi_processor_count=104, cc=\'gfx90a\', major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=64), \'constants\': {}, \'configs\': [AttrsDescriptor.from_dict({\'arg_properties\': {\'tt.divisibility\': (0, 1, 2), \'tt.equal_to\': ()}, \'cls\': \'AttrsDescriptor\'})]},\n inductor_meta={\'autotune_hints\': set(), \'kernel_name\': \'triton_poi_fused_mm_0\', \'mutated_arg_names\': [], \'optimize_mem\': False, \'no_x_dim\': False, \'num_load\': 1, \'num_reduction\': 0, \'backend_hash\': \'F6932448A4E62C51BD00A53B3A5B319A01AE443E644330D18726F033C2FD6BBE\', \'are_deterministic_algorithms_enabled\': False, \'assert_indirect_indexing\': True, \'autotune_local_cache\': True, \'autotune_pointwise\': True, \'autotune_remote_cache\': None, \'force_disable_caches\': False, \'dynamic_scale_rblock\': True, \'max_autotune\': False, \'max_autotune_pointwise\': False, \'min_split_scan_rblock\': 256, \'spill_threshold\': 16, \'store_cubin\': False, \'is_hip\': True},\n min_elem_per_thread=0\n)\n@triton.jit\ndef triton_poi_fused_mm_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):\n xnumel = 192\n xoffset = tl.program_id(0) * XBLOCK\n xindex = xoffset + tl.arange(0, XBLOCK)[:]\n xmask = xindex < xnumel\n x0 = (xindex % 12)\n x1 = xindex // 12\n x2 = xindex\n tmp0 = x0\n tmp1 = tl.full([1], 10, tl.int64)\n tmp2 = tmp0 < tmp1\n tmp3 = tmp2.to(tl.int1)\n tmp4 = tl.load(in_ptr0 + (x0 + 10*x1), tmp3 & xmask, other=0.0)\n tl.store(out_ptr0 + (x2), tmp4, xmask)\n\'\'\', device_str=\'cuda\')\n\n\n# kernel path: /tmp/tmp4rsnqzr5/5n/c5nsbqhb7krrbf2kn6zlpa5mnpq4uvngx473ztqryvsqxkkqdv3s.py\n# Topologically Sorted Source Nodes: [linear], Original ATen: [aten.mm]\n# Source node to ATen node mapping:\n# linear => constant_pad_nd_default_1\n# Graph fragment:\n# %constant_pad_nd_default_1 : 
[num_users=1] = call_function[target=torch.ops.aten.constant_pad_nd.default](args = (%permute, [0, 0, 0, 2]), kwargs = {})\ntriton_poi_fused_mm_1 = async_compile.triton(\'triton_poi_fused_mm_1\', \'\'\'\nimport triton\nimport triton.language as tl\nfrom triton.compiler.compiler import AttrsDescriptor\n\nfrom torch._inductor.runtime import triton_helpers, triton_heuristics\nfrom torch._inductor.runtime.triton_helpers import libdevice, math as tl_math\nfrom torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties\ntriton_helpers.set_driver_to_gpu()\n\n@triton_heuristics.pointwise(\n size_hints={\'x\': 128}, \n filename=__file__,\n triton_meta={\'signature\': {\'in_ptr0\': \'*fp32\', \'out_ptr0\': \'*fp32\', \'xnumel\': \'i32\'}, \'device\': DeviceProperties(type=\'hip\', index=0, multi_processor_count=104, cc=\'gfx90a\', major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=64), \'constants\': {}, \'configs\': [AttrsDescriptor.from_dict({\'arg_properties\': {\'tt.divisibility\': (0, 1, 2), \'tt.equal_to\': ()}, \'cls\': \'AttrsDescriptor\'})]},\n inductor_meta={\'autotune_hints\': set(), \'kernel_name\': \'triton_poi_fused_mm_1\', \'mutated_arg_names\': [], \'optimize_mem\': False, \'no_x_dim\': False, \'num_load\': 1, \'num_reduction\': 0, \'backend_hash\': \'F6932448A4E62C51BD00A53B3A5B319A01AE443E644330D18726F033C2FD6BBE\', \'are_deterministic_algorithms_enabled\': False, \'assert_indirect_indexing\': True, \'autotune_local_cache\': True, \'autotune_pointwise\': True, \'autotune_remote_cache\': None, \'force_disable_caches\': False, \'dynamic_scale_rblock\': True, \'max_autotune\': False, \'max_autotune_pointwise\': False, \'min_split_scan_rblock\': 256, \'spill_threshold\': 16, \'store_cubin\': False, \'is_hip\': True},\n min_elem_per_thread=0\n)\n@triton.jit\ndef triton_poi_fused_mm_1(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):\n xnumel = 96\n xoffset = tl.program_id(0) * XBLOCK\n 
xindex = xoffset + tl.arange(0, XBLOCK)[:]\n xmask = xindex < xnumel\n x0 = (xindex % 12)\n x1 = xindex // 12\n x2 = xindex\n tmp0 = x0\n tmp1 = tl.full([1], 10, tl.int64)\n tmp2 = tmp0 < tmp1\n tmp3 = tmp2.to(tl.int1)\n tmp4 = tl.load(in_ptr0 + (x0 + 10*x1), tmp3 & xmask, other=0.0)\n tl.store(out_ptr0 + (x2), tmp4, xmask)\n\'\'\', device_str=\'cuda\')\n\n\n# kernel path: /tmp/tmp4rsnqzr5/4v/c4vojku6k2yymfhngu74bmljpx2ib3cus653l3lcaiuxp2x7ckvo.py\n# Topologically Sorted Source Nodes: [input_tensor_2], Original ATen: [aten.relu]\n# Source node to ATen node mapping:\n# input_tensor_2 => relu\n# Graph fragment:\n# %relu : [num_users=2] = call_function[target=torch.ops.aten.relu.default](args = (%mm_default,), kwargs = {})\ntriton_poi_fused_relu_2 = async_compile.triton(\'triton_poi_fused_relu_2\', \'\'\'\nimport triton\nimport triton.language as tl\nfrom triton.compiler.compiler import AttrsDescriptor\n\nfrom torch._inductor.runtime import triton_helpers, triton_heuristics\nfrom torch._inductor.runtime.triton_helpers import libdevice, math as tl_math\nfrom torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties\ntriton_helpers.set_driver_to_gpu()\n\n@triton_heuristics.pointwise(\n size_hints={\'x\': 128}, \n filename=__file__,\n triton_meta={\'signature\': {\'in_out_ptr0\': \'*fp32\', \'xnumel\': \'i32\'}, \'device\': DeviceProperties(type=\'hip\', index=0, multi_processor_count=104, cc=\'gfx90a\', major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=64), \'constants\': {}, \'configs\': [AttrsDescriptor.from_dict({\'arg_properties\': {\'tt.divisibility\': (0, 1), \'tt.equal_to\': ()}, \'cls\': \'AttrsDescriptor\'})]},\n inductor_meta={\'autotune_hints\': set(), \'kernel_name\': \'triton_poi_fused_relu_2\', \'mutated_arg_names\': [\'in_out_ptr0\'], \'optimize_mem\': False, \'no_x_dim\': False, \'num_load\': 1, \'num_reduction\': 0, \'backend_hash\': 
\'F6932448A4E62C51BD00A53B3A5B319A01AE443E644330D18726F033C2FD6BBE\', \'are_deterministic_algorithms_enabled\': False, \'assert_indirect_indexing\': True, \'autotune_local_cache\': True, \'autotune_pointwise\': True, \'autotune_remote_cache\': None, \'force_disable_caches\': False, \'dynamic_scale_rblock\': True, \'max_autotune\': False, \'max_autotune_pointwise\': False, \'min_split_scan_rblock\': 256, \'spill_threshold\': 16, \'store_cubin\': False, \'is_hip\': True},\n min_elem_per_thread=0\n)\n@triton.jit\ndef triton_poi_fused_relu_2(in_out_ptr0, xnumel, XBLOCK : tl.constexpr):\n xnumel = 128\n xoffset = tl.program_id(0) * XBLOCK\n xindex = xoffset + tl.arange(0, XBLOCK)[:]\n xmask = xindex < xnumel\n x0 = xindex\n tmp0 = tl.load(in_out_ptr0 + (x0), xmask)\n tmp1 = tl.full([1], 0, tl.int32)\n tmp2 = triton_helpers.maximum(tmp1, tmp0)\n tl.store(in_out_ptr0 + (x0), tmp2, xmask)\n\'\'\', device_str=\'cuda\')\n\n\nasync_compile.wait(globals())\ndel async_compile\n\ndef call(args):\n primals_1, primals_2, primals_3 = args\n args.clear()\n assert_size_stride(primals_1, (8, 10), (10, 1))\n assert_size_stride(primals_2, (8, 10), (10, 1))\n assert_size_stride(primals_3, (10, 8), (8, 1))\n with torch.cuda._DeviceGuard(0):\n torch.cuda.set_device(0)\n # Topologically Sorted Source Nodes: [input_tensor_1], Original ATen: [_c10d_functional.all_gather_into_tensor]\n buf0 = torch.ops._c10d_functional.all_gather_into_tensor.default(primals_1, 2, \'0\')\n assert_size_stride(buf0, (16, 10), (10, 1))\n # Topologically Sorted Source Nodes: [input_tensor_1], Original ATen: [_c10d_functional.wait_tensor]\n torch.ops._c10d_functional.wait_tensor.default(buf0)\n del primals_1\n buf3 = empty_strided_cuda((16, 12), (12, 1), torch.float32)\n # Topologically Sorted Source Nodes: [linear], Original ATen: [aten.mm]\n stream0 = get_raw_stream(0)\n triton_poi_fused_mm_0.run(buf0, buf3, 192, grid=grid(192), stream=stream0)\n buf4 = empty_strided_cuda((12, 8), (1, 12), torch.float32)\n # 
Topologically Sorted Source Nodes: [linear], Original ATen: [aten.mm]\n stream0 = get_raw_stream(0)\n triton_poi_fused_mm_1.run(primals_2, buf4, 96, grid=grid(96), stream=stream0)\n del primals_2\n buf5 = empty_strided_cuda((16, 8), (8, 1), torch.float32)\n # Topologically Sorted Source Nodes: [linear], Original ATen: [aten.mm]\n extern_kernels.mm(buf3, buf4, out=buf5)\n del buf3\n del buf4\n buf6 = buf5; del buf5 # reuse\n # Topologically Sorted Source Nodes: [input_tensor_2], Original ATen: [aten.relu]\n stream0 = get_raw_stream(0)\n triton_poi_fused_relu_2.run(buf6, 128, grid=grid(128), stream=stream0)\n # Topologically Sorted Source Nodes: [], Original ATen: []\n buf7 = torch.ops.symm_mem.fused_matmul_reduce_scatter.default(buf6, reinterpret_tensor(primals_3, (8, 10), (1, 8), 0), \'sum\', 0, \'0\')\n buf8 = buf7\n del buf7\n return (buf8, buf0, buf6, primals_3, )\n\n\ndef benchmark_compiled_module(times=10, repeat=10):\n from torch._dynamo.testing import rand_strided\n from torch._inductor.utils import print_performance\n primals_1 = rand_strided((8, 10), (10, 1), device=\'cuda:0\', dtype=torch.float32)\n primals_2 = rand_strided((8, 10), (10, 1), device=\'cuda:0\', dtype=torch.float32)\n primals_3 = rand_strided((10, 8), (8, 1), device=\'cuda:0\', dtype=torch.float32)\n fn = lambda: call([primals_1, primals_2, primals_3])\n return print_performance(fn, times=times, repeat=repeat)\n\n\nif __name__ == "__main__":\n from torch._inductor.wrapper_benchmark import compiled_module_main\n compiled_module_main(\'None\', benchmark_compiled_module)\n' To execute this test, run the following from the base repo dir: PYTORCH_TEST_WITH_ROCM=1 python test/distributed/tensor/parallel/test_micro_pipeline_tp.py MicroPipelineTPTest.test_dtensor_seq_par_shard_dim_0 This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `distributed/tensor/parallel/test_micro_pipeline_tp.py` cc @clee2000 @wdvr
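The four-step grepping procedure in the debugging instructions above can be sketched in shell (a minimal sketch: `log.txt` is a hypothetical stand-in for the downloaded raw log of the Test step, with made-up content; in practice you would grep the actual workflow log):

```shell
# Stand-in for the downloaded raw log of the Test step (hypothetical content).
printf '%s\n' 'FAILED test_dtensor_seq_par_shard_dim_0' \
              '... rerun output ...' \
              'PASSED test_dtensor_seq_par_shard_dim_0' > log.txt

# Step 3: grep for the test name, with a little context around each hit.
grep -n -C 1 "test_dtensor_seq_par_shard_dim_0" log.txt

# Counting hits confirms the reruns: flaky tests are rerun in CI,
# so several instances should appear.
grep -c "test_dtensor_seq_par_shard_dim_0" log.txt
```

The count in the last command should exceed one whenever the flaky-test rerun machinery kicked in.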
true
2,817,589,102
Update mi300 labels to account for multiple clusters.
saienduri
closed
[ "module: rocm", "triaged", "open source", "Merged", "Reverted", "topic: not user facing", "ci-no-td" ]
11
CONTRIBUTOR
We now have multiple Kubernetes clusters of mi300x resources, and this commit updates labels accordingly to target both clusters evenly. cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
true
2,817,587,352
Update NestedInt equality to take into account all metadata
soulitzer
open
[ "module: cpu", "release notes: fx", "fx", "module: dynamo", "ciflow/inductor", "no-stale" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #146172 * #146101 * __->__ #145922 * #141842 * #141841 * #146052 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,817,550,137
Add retain-output argument
kfojcik-intel
closed
[ "triaged", "open source", "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo" ]
18
CONTRIBUTOR
This PR adds a retain-output argument, which enables appending to an already existing output file instead of deleting it and creating a new one. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,817,546,248
Add basic Gaudi support to benchmarks/dynamo
kfojcik-intel
closed
[ "triaged", "open source", "Merged", "ciflow/trunk", "topic: not user facing", "oncall: pt2", "module: dynamo" ]
13
CONTRIBUTOR
This PR adds basic Gaudi support to benchmarks/dynamo. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,817,379,388
DISABLED test_comprehensive_any_cuda_float32 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
closed
[ "module: rocm", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
2
NONE
Platforms: rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_any_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36318896504). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_comprehensive_any_cuda_float32` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper return test(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1444, in only_fn return fn(self, *args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2262, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn return fn(slf, *args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn return fn(slf, *args, 
**kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1620, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1542, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched return func(*newargs, **newkeywargs) File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 949, in inner raise e File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 941, in inner fn(self, device, dtype, op) File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1188, in test_comprehensive raise e File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1148, in test_comprehensive self.check_model_gpu( File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 628, in check_model_gpu check_model( File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 510, in check_model self.assertEqual( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4042, in assertEqual raise error_metas.pop()[0].to_error( # type: ignore[index] AssertionError: Scalars are not close! Expected True but got True. 
Absolute difference: 0 (up to 1.5e-05 allowed) Relative difference: 0.0 (up to 1.3e-05 allowed) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1620, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper raise e_tracked from e Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.float32], args=(), kwargs={}, broadcasts_input=False, name='') To execute this test, run the following from the base repo dir: PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_any_cuda_float32 This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `inductor/test_torchinductor_opinfo.py` cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,817,299,863
[Customized Optimus] Add select cat aten pass
mengluy0125
closed
[ "fb-exported", "Merged", "ciflow/trunk", "module: inductor", "ciflow/inductor", "release notes: inductor", "inductor_pattern_match" ]
5
CONTRIBUTOR
Summary: This is a follow up work of D68695717, where we can further reduce the number of cat kernels in the backward by designing new aten pass in the aten level. Test Plan: # unit test ``` buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:split_cat_fx_aten_passes -- test_select_cat_post_grad ``` Buck UI: https://www.internalfb.com/buck2/6943087f-91be-4dbd-9693-df0a11a50b73 Test UI: https://www.internalfb.com/intern/testinfra/testrun/11821949087998233 Network: Up: 101KiB Down: 132KiB (reSessionID-60e898af-f366-4247-a9f7-d8d7cd129fe0) Analyzing targets. Remaining 0/78148 Executing actions. Remaining 0/476147 Command: test. Finished 2 local Tests finished: Pass 3. Fail 0. Fatal 0. Skip 0. Build failure 0 # E2E ### how to add the config ``` post_grad_fusion_options: { "normalization_aten_pass": {}, "split_cat_aten_pass": {}, "select_cat_aten_pass": {}, } ``` {F1974778773} baseline: aps-recgpt_ranking_1115_pt2_optimus-e52c1f277e proposal aps-recgpt_ranking_1115_pt2_optimus-1b0047ee0e Differential Revision: D68803384 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,817,270,391
Draft: fix: Some smaller mingw fixes
joda01
closed
[ "open source", "Stale" ]
4
NONE
Fixes #ISSUE_NUMBER
true
2,817,266,886
[inductor] Add typing to common.KernelArgs
jansel
closed
[ "Merged", "Reverted", "topic: not user facing", "module: inductor", "ciflow/inductor", "ci-no-td" ]
7
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #146226 * #146225 * #145993 * __->__ #145916 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,817,198,403
[inductor] Add typing to common.OpDecompositions
jansel
closed
[ "Merged", "topic: not user facing", "ciflow/mps", "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #146235 * #146226 * #146225 * #145993 * #145916 * __->__ #145915 * #145914 * #145913 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,817,198,349
[inductor] Combine regexp checks in OpOverrides.paren
jansel
closed
[ "Merged", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #146235 * #146226 * #146225 * #145993 * #145916 * #145915 * __->__ #145914 * #145913 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,817,198,218
[inductor] Add types to DeviceOpOverrides
jansel
closed
[ "Merged", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #146235 * #146226 * #146225 * #145993 * #145916 * #145915 * #145914 * __->__ #145913 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,817,159,357
Deprecate conditional view for `torch.reshape`
kurtamohler
closed
[ "module: cpu", "open source", "release notes: python_frontend" ]
2
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145912 * #145911 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
true
2,817,159,292
Add future lazy clone setting and deprecate `torch.reshape` view
kurtamohler
open
[ "oncall: distributed", "module: cpu", "open source", "release notes: python_frontend", "module: inductor", "module: dynamo", "ciflow/inductor" ]
3
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145911 cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov @ColinPeppler
true
2,817,150,655
Fix redundant move
cyyever
closed
[ "oncall: jit", "open source", "NNC", "Stale", "release notes: jit" ]
2
COLLABORATOR
Fixes #ISSUE_NUMBER cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
true
2,817,133,209
[dynamo][builin-skipfiles-cleanup] Remove types
anijain2305
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * (to be filled) cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,817,102,438
Debug perf regression
huydhn
closed
[ "module: dynamo", "ciflow/inductor", "no-runner-experiments", "ciflow/inductor-periodic", "test-config/inductor_torchbench_smoketest_perf" ]
2
CONTRIBUTOR
DEBUG, no need to review cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,817,096,807
There may be a documentation Error in torch.nn.CrossEntropyLoss Formula
MJWade96
open
[ "module: docs", "module: loss", "triaged" ]
1
NONE
### 📚 The doc issue I am not sure whether this is an error or just a misunderstanding on my part. Documentation Location: (https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) In the first list item (Class indices in the range ...), it says the range is [0, C), but in the formula for l_n, the summation starts from c=1 and ends at C. ### Suggest a potential alternative/fix _No response_ cc @svekars @brycebortree @sekyondaMeta @AlannaBurke
true
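A small pure-Python check may help clarify the indexing question in the issue above (illustrative only; `cross_entropy` here is a hypothetical helper, not PyTorch's implementation): the class index selects one of C logits, while the normalization term sums over all C classes, so writing that sum as c=0..C-1 or c=1..C is purely a notational choice.

```python
import math

def cross_entropy(logits, target):
    # target is a class index in [0, C); the log-sum-exp normalization
    # term ranges over all C classes regardless of whether the formula
    # labels them 0..C-1 or 1..C.
    log_sum_exp = math.log(sum(math.exp(z) for z in logits))
    return log_sum_exp - logits[target]

# C = 3 classes, so valid targets are 0, 1, 2
loss = cross_entropy([1.0, 2.0, 3.0], 2)
```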
2,817,096,461
Use std::string_view
cyyever
closed
[ "oncall: jit", "open source", "Merged", "ciflow/trunk", "topic: not user facing", "ciflow/periodic" ]
9
COLLABORATOR
Fixes #ISSUE_NUMBER cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
true
2,817,090,553
Let PYTORCH_NO_CUDA_MEMORY_CACHING have effect only when its value is 1
cyyever
closed
[ "triaged", "open source", "Merged", "ciflow/trunk", "release notes: cuda", "topic: bug fixes" ]
10
COLLABORATOR
Fixes #145661
true
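The stricter semantics this PR describes can be sketched in a few lines of Python (a hypothetical helper mirroring the intent, not the actual C++ change): only the exact value `"1"` disables the caching allocator, so `"0"`, an empty string, or an unset variable leave caching on.

```python
import os

def caching_allocator_disabled(env=None):
    # The variable takes effect only when set to exactly "1";
    # any other value (or no value) keeps CUDA memory caching enabled.
    env = os.environ if env is None else env
    return env.get("PYTORCH_NO_CUDA_MEMORY_CACHING") == "1"
```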
2,817,064,531
[inductor] add size-asserts for fallback ops
shunting314
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "module: dynamo", "ciflow/inductor", "keep-going" ]
26
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145904 Fix https://github.com/pytorch/pytorch/issues/144717 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,817,031,480
[NJT] Add cumsum support for nested tensors
ketansingh
open
[ "fb-exported", "Stale", "topic: improvements", "release notes: nested tensor" ]
10
NONE
Summary: - Add cumsum support for NT Test Plan: - added unit tests Differential Revision: D68307097
true
2,817,016,949
xpu: installed pytorch is missing aten xpu ops headers (ATen/ops/cat_xpu_dispatch.h and others)
dvrogozh
open
[ "module: binaries", "module: build", "triaged", "module: xpu" ]
12
CONTRIBUTOR
With https://github.com/pytorch/pytorch/commit/635b98fa087fa21acfdf35e95e0f2c2f56064605. As first noted in https://github.com/pytorch/pytorch/pull/132945#discussion_r1931407357, pytorch built with XPU backend is missing a range of ATen operator headers (missing installed: they are generated, but not actually installed). Specifically, only those headers are available which are generated from torch stock sources (i.e. from `./aten/src/ATen/native/native_functions.yaml`), these are: ``` $ find torch/include/ATen/ops -name "*xpu*" torch/include/ATen/ops/baddbmm_xpu_dispatch.h torch/include/ATen/ops/bmm_xpu_dispatch.h torch/include/ATen/ops/mm_xpu_dispatch.h torch/include/ATen/ops/_addmm_activation_xpu_dispatch.h torch/include/ATen/ops/addmm_xpu_dispatch.h torch/include/ATen/ops/addmv_xpu_dispatch.h torch/include/ATen/ops/addbmm_xpu_dispatch.h ``` For CUDA a lot more are generated (from the same .yaml file): ``` $ find torch/include/ATen/ops -name "*cuda*" | wc -l 604 ``` For XPU however, those missing headers are generated separately from [torch-xpu-ops](https://github.com/intel/torch-xpu-ops). These are defined by https://github.com/intel/torch-xpu-ops/blob/main/yaml/native/native_functions.yaml. These XPU headers are generated, but not installed. These missing headers might be used by some third party projects. On pytorch level these headers are used in some tests. For example, see https://github.com/pytorch/pytorch/pull/138088 which needs `ATen/ops/cat_*_dispatch.h` and `ATen/ops/norm_*_dispatch.h`. **Can missing aten xpu headers be installed?** I think that current challenge is that torch-xpu-ops generates not only XPU specific files from https://github.com/intel/torch-xpu-ops/blob/main/yaml/native/native_functions.yaml, but generic files as well. I.e. we get duplicated files generated from stock pytorch sources and from xpu sources. For example: ``` $ find . -name _aminmax.h ./build/aten/src/ATen/ops/_aminmax.h ./build/xpu/ATen/ops/_aminmax.h ./torch/include/ATen/ops/_aminmax.h ``` I guess there are 2 ways forward: 1. Filter out non-XPU header files and install only them 2. Move XPU headers generation to stock pytorch and drop XPU torch-xpu-ops level customization CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5 @albanD cc @seemethere @malfet @osalpekar @atalman @gujinghui @EikanWang @fengyuan14 @guangyey
true
2,817,008,269
[export] nested terms in nn_module_stack deserialization
pianpwk
closed
[ "fb-exported", "Merged", "ciflow/trunk", "ciflow/inductor", "release notes: export" ]
7
CONTRIBUTOR
Summary: accounting for terms like "getattr(getattr(a[0], b), c)". Test Plan: test_serialize Differential Revision: D68784736
true
2,817,007,138
DISABLED test_conv_backward_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
pytorch-bot[bot]
closed
[ "module: rocm", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
3
NONE
Platforms: rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_conv_backward_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36309985985). Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_conv_backward_dynamic_shapes_cuda` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 9482, in test_conv_backward self.common( File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 628, in check_model_gpu check_model( File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 417, in check_model eager_result = model(*ref_inputs, **ref_kwargs) File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 9418, in fn out1 = aten.convolution_backward( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 1149, in __call__ return self._op(*args, **(kwargs or {})) MemoryError: std::bad_alloc To execute this test, run the following from the base repo dir: PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_conv_backward_dynamic_shapes_cuda This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `inductor/test_torchinductor_dynamic_shapes.py` cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,817,003,010
torch.compile + torch.autograd.grad: grad can be implicitly created only for scalar outputs
xmfan
open
[ "triaged", "oncall: pt2", "module: aotdispatch", "module: dynamo", "module: pt2-dispatcher" ]
3
MEMBER
### 🐛 Describe the bug ```python import torch from torch import nn class Model(nn.Module): def __init__(self): super().__init__() self.a = nn.Linear(10, 10) self.b = nn.Linear(10, 10) self.c = nn.Linear(10, 10) def forward(self, x): ia = self.a(x) ib = self.b(ia) return self.c(ib) def compute_loss(actual): expected = torch.ones_like(actual) return nn.MSELoss()(actual, expected) def train(model): x = torch.randn(10, 10) out = model(x) loss = compute_loss(out) loss.backward() def train_split(model): x = torch.randn(10, 10) out = model(x) # Step 1: Compute the loss loss = compute_loss(out) # Step 2: Compute the gradient of the loss w.r.t. `out` grads_out = torch.autograd.grad(loss, out, retain_graph=True, allow_unused=True)[0] # Step 3: Compute the gradients of `out` w.r.t. the parameters of `c` grads_c = torch.autograd.grad( outputs=out, inputs=[model.c.weight, model.c.bias], grad_outputs=grads_out, retain_graph=True ) model.c.weight.grad, model.c.bias.grad = grads_c # Step 4: Compute the gradients of `out` w.r.t. the parameters of `b` grads_ib = torch.autograd.grad( outputs=out, inputs=[model.b.weight, model.b.bias], grad_outputs=grads_out, retain_graph=True ) model.b.weight.grad, model.b.bias.grad = grads_ib # Step 5: Compute the gradients of `out` w.r.t. the parameters of `a` grads_ia = torch.autograd.grad( outputs=out, inputs=[model.a.weight, model.a.bias], grad_outputs=grads_out, ) model.a.weight.grad, model.a.bias.grad = grads_ia def test(fn): torch.manual_seed(0) model = Model() fn(model) return [param.grad for param in model.parameters()] print("Running eager, train") grads = test(train) print("Running eager, train_split") grads_split = test(train_split) assert all(torch.equal(g1, g2) for g1, g2 in zip(grads, grads_split)) print("Running compiled, train") c_train = torch.compile(train, backend="aot_eager") c_grads = test(c_train) assert all(torch.equal(g1, g2) for g1, g2 in zip(grads, c_grads)) print("Running compiled, train_split") c_train_split = torch.compile(train_split, backend="aot_eager") c_grads_split = test(c_train_split) # <-- errors assert all(torch.equal(g1, g2) for g1, g2 in zip(grads, c_grads_split)) ``` ``` Running eager, train Running eager, train_split Running compiled, train Running compiled, train_split Traceback (most recent call last): File "/data/users/xmfan/core/b/pytorch/err_bwd.py", line 87, in <module> c_grads_split = test(c_train_split) File "/data/users/xmfan/core/b/pytorch/err_bwd.py", line 68, in test fn(model) File "/data/users/xmfan/core/b/pytorch/torch/_dynamo/eval_frame.py", line 574, in _fn return fn(*args, **kwargs) File "/data/users/xmfan/core/b/pytorch/err_bwd.py", line 36, in train_split grads_out = torch.autograd.grad(loss, out, retain_graph=True, allow_unused=True)[0] File "/data/users/xmfan/core/b/pytorch/err_bwd.py", line 39, in torch_dynamo_resume_in_train_split_at_36 grads_c = torch.autograd.grad( File "/data/users/xmfan/core/b/pytorch/torch/autograd/__init__.py", line 475, in grad grad_outputs_ = _make_grads( File "/data/users/xmfan/core/b/pytorch/torch/autograd/__init__.py", line 199, in _make_grads raise RuntimeError( RuntimeError: grad can be implicitly created only for scalar outputs ``` ### Versions main cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @bdhirsh @yf225
true
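The splitting trick in the repro above relies only on the chain rule: once the upstream gradient of the loss w.r.t. `out` is known, each stage's gradients can be computed independently by multiplying through it, which is what passing `grad_outputs=` to `torch.autograd.grad` does. A scalar sketch with hypothetical stand-in functions (not the repro's model):

```python
# loss = g(f(x)); split the backward pass into two stages, the way the
# repro feeds grads_out into per-layer torch.autograd.grad calls.
def f(x):
    return 3.0 * x            # stage 1

def df_dx(x):
    return 3.0                # derivative of stage 1

def g(y):
    return y * y              # stage 2 produces the scalar loss

def dg_dy(y):
    return 2.0 * y            # derivative of stage 2

x = 2.0
out = f(x)
grad_out = dg_dy(out)          # like torch.autograd.grad(loss, out)
grad_x = grad_out * df_dx(x)   # like grad(out, x, grad_outputs=grad_out)
```

Here `grad_x` matches the direct derivative of the composed function, d/dx (3x)^2 = 18x.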
2,816,990,621
re-use FloorDiv for RShift
ColinPeppler
closed
[ "module: cpu", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
I encountered this C++ compilation error. ``` 579 | int64_t var_6 = (static_cast<int64_t>(std::floor((1.0/2.0)*u0)) | static_cast<int64_t>(std::floor((1.0/4.0)*static_cast<int64_t>(std::floor((1.0/2.0)*u0))))) | std::floor((1.0/16.0)*(static_cast<int64_t>(std::floor((1.0/2.0)*u0)) | static_cast<int64_t>(std::floor((1.0/4.0)*static_cast<int64_t>(std::floor((1.0/2.0)*u0)))))); | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | int64_t {aka long int} double ``` Then, I figured out where this std::floor came from with the help of Bob's guard provenance tool. It comes from RShift which is used in `triton.next_power_of_2`. --- Before, we used `std::floor` ``` int64_t var_6 = ( static_cast<int64_t>(std::floor((1.0/2.0)*u0)) | static_cast<int64_t>(std::floor((1.0/4.0)*static_cast<int64_t>(std::floor((1.0/2.0)*u0))))) | std::floor((1.0/16.0)*(static_cast<int64_t>(std::floor((1.0/2.0)*u0)) # no cast to int here. | static_cast<int64_t>(std::floor((1.0/4.0)*static_cast<int64_t>(std::floor((1.0/2.0)*u0)))))); ``` Now, we use `c10::div_floor_integer` instead ``` int64_t var_6 = ( (c10::div_floor_integer(static_cast<int64_t>(u0), static_cast<int64_t>(2L))) | (c10::div_floor_integer(static_cast<int64_t>(u0), static_cast<int64_t>(8L)))) | (c10::div_floor_integer(static_cast<int64_t>((c10::div_floor_integer(static_cast<int64_t>(u0), static_cast<int64_t>(2L))) | (c10::div_floor_integer(static_cast<int64_t>(u0), static_cast<int64_t>(8L)))), static_cast<int64_t>(16L))); ``` Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145898 * #145802 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
true
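The identity behind the fix above is that, for nonnegative integers, a right shift is exactly a floor-division by a power of two, so the lowering never needs a floating-point `std::floor`. A quick sketch (a hypothetical helper, checked against Python's own `>>`):

```python
def rshift_as_floordiv(x, n):
    # For nonnegative integers, x >> n == x // 2**n exactly, which is
    # why RShift can lower to integer floor-division
    # (c10::div_floor_integer) rather than std::floor on a double.
    assert x >= 0 and n >= 0
    return x // (1 << n)
```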
2,816,969,715
[ONNX] Change constant folding and redundancies elimination to default in torch.onnx.export(..., dynamo=True)
titaiwangms
closed
[ "module: onnx", "triaged", "onnx-triaged" ]
1
COLLABORATOR
We should make the optimization a default setting, as it's more stable now. https://github.com/pytorch/pytorch/blob/2f24f2eb46102381430777af2985b6af6e5d2cec/torch/onnx/_internal/exporter/_onnx_program.py#L119 cc @xadupre @justinchuby @gramalingam
true
2,816,966,418
[1/N][cp][example] flex attention in context parallel (forward pass)
XilunWu
closed
[ "oncall: distributed", "Merged", "topic: not user facing", "ciflow/inductor", "module: context parallel" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #146397 * __->__ #145896 **Description** This is an example of how FlexAttention can be used in a context parallel fashion. Right now it's only a flex_attention call with collectives added and has no load balancer, but we're about to add the missing parts step by step: 1. backward pass 2. static load balancing for causal masking 3. dynamic load balancing for other general maskings 4. automatic collective insertion solution 5. non-intrusive context parallel APIs **Test** `torchrun --standalone --nnodes=1 --nproc-per-node=4 torch/distributed/tensor/examples/flex_attention_cp.py` cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
true
2,816,963,759
[BE] reduce log spew from test_triton_kernels.py
davidberard98
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145895 One of the tests in this file was setting `self._logging.set_logs(output_code=True)` - which would cause logs to be printed for the rest of the tests in this file. This PR puts the log-setting in a context manager so that the old behavior is restored afterwards. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,816,958,058
[Inductor-CPU] Add profiling support for codegened flex attention kernels
sanchitintel
closed
[ "open source", "Merged", "ciflow/trunk", "module: inductor", "ciflow/inductor", "release notes: inductor" ]
6
COLLABORATOR
### Summary `RECORD_FUNCTION` wasn't present in codegened Inductor-CPU Flex Attention C++ kernels, so flex attention kernels weren't present in the PyTorch profiler profiling data. Fixes #145825 by adding `RECORD_FUNCTION` calls in the codegened flex-attention kernels. ### Caveat #### _Before_ No corresponding results in PyTorch profiler profiling data #### _After_ | Inductor config settings | What kernel name looks like in profiling data | Comments| |-------------------|------------------------------------|--------------------| | Env variable `TORCHINDUCTOR_CPP_WRAPPER=1` OR `inductor.config.cpp_wrapper=1` in python code | `graph_x_cpp_fused_y` | No way to tell from the profiling results if the kernel is a GEMM kernel or an attention kernel | | `inductor.config.cpp.descriptive_names = "inductor_node"` but not CPP wrapper | `graph_x_kernel` | No way to tell from the profiling results if the kernel is a GEMM kernel or an attention kernel | | Both `inductor_config.cpp.descriptive_names = "inductor_node"` & Inductor CPP Wrapper | `graph_x_cpp_fused_flex_attention_y`| Easy to interpret data | | Neither of the two configs | `graph_x_kernel`| No way to tell from the profiling results if the kernel is a GEMM kernel or an attention kernel | cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,816,931,142
[PGNCCL] Correct some ifdef's
kwen2501
closed
[ "oncall: distributed", "Merged", "ciflow/trunk", "release notes: distributed (c10d)", "topic: not user facing" ]
5
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145964 * __->__ #145893 `create` function supporting `ncclConfig_t` should be wrapped inside `NCCL_HAS_CONFIG` instead of `NCCL_HAS_COMM_NONBLOCKING` cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
true
2,816,914,454
[dynamo][builtin-skipfiles-cleanup] remove abc, enum, importlib
anijain2305
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145804 * #145876 * #145909 * __->__ #145892 * #145878 * #145875 * #145856 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,816,909,919
[cutlass backend] update try_import_cutlass to accommodate pip install
henrylhtsang
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145891 The goal of this PR is to provide 3 ways for people to try out CUTLASS backend: 1. fbcode / internal 2. pip install torch (nightly) and pip install nvidia-cutlass 3. build from source I will go into more detailed combos between building from source and downloading via pip for torch and cutlass. repro: ``` import torch import torch.nn as nn import torch._inductor.config as config config.force_disable_caches = True config.max_autotune = True config.max_autotune_gemm_backends = "CUTLASS" # the following is only needed if you use a custom cutlass library # config.cuda.cutlass_dir = "/data/users/henrylhtsang/cutlass" class TestModule(nn.Module): def forward(self, A, B): return A @ B model = TestModule().cuda() M, K, N = 2048, 2048, 2048 A = torch.randn(M, K).cuda().half() B = torch.randn(K, N).cuda().half() C = torch.compile(model, fullgraph=True)(A, B) ``` ## pre-requisite Assuming you have the right cuda toolkit. Recommend 12.4. Make sure PATH, LD_LIBRARY_PATH and CUDA_NVCC_EXECUTABLE are good. ## combo 1: pip install torch + pip install nvidia-cutlass Check https://pytorch.org/get-started/locally/ for **nightly** install command. ``` pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124 pip install nvidia-cutlass ``` Then try running the script above. It should work. ## combo 2: build torch from source + pip install nvidia-cutlass This is going to be pretty straightforward. Just keep in mind that even though pytorch/third_party/cutlass exists, the one that will be used is the pip package, so be mindful of version differences. ## combo 3: build torch from source + use pytorch/third_party/cutlass This is how most pytorch devs would do it. Just make sure you don't have a cutlass pip package installed, i.e., make sure `import cutlass_library` would fail on its own. ## combo 4: any torch version + cutlass library from somewhere else This is probably the only case you need to pass in cutlass_dir. Just set cutlass_dir to the cutlass repo library. The expectation is that cutlass_dir is the directory that contains include, tool, and python/cutlass_library. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,816,900,868
[CD][MacOS] Don't install `libuv` from conda
malfet
closed
[ "topic: not user facing", "ciflow/binaries_wheel" ]
5
CONTRIBUTOR
As one can see from build logs, it's been build as part of TensorPipe dependency: ``` 2025-01-28T21:22:21.8630760Z [3444/5234] Building C object third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe_uv.dir/__/third_party/libuv/src/random.c.o 2025-01-28T21:22:21.8731260Z [3445/5234] Building C object third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe_uv.dir/__/third_party/libuv/src/fs-poll.c.o 2025-01-28T21:22:21.8833110Z [3446/5234] Building C object third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe_uv.dir/__/third_party/libuv/src/idna.c.o 2025-01-28T21:22:21.8934190Z [3447/5234] Building C object third_party/tensorpipe/tensorpipe/CMakeFiles/tensorpipe_uv.dir/__/third_party/libuv/src/inet.c.o ``` Partially addresses https://github.com/pytorch/pytorch/issues/145872 Test plan: Run https://pytorch.org/tutorials/intermediate/TCPStore_libuv_backend.html
true
2,816,896,028
[CD] Install OpenMP from homebrew
malfet
closed
[ "Merged", "release notes: releng", "topic: improvements", "ciflow/binaries_wheel" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145889 * #145870
true
2,816,888,043
[pytorch] raise exception when calling dim order on sparse tensor
Gasoonjia
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145888 This diff introduces a change to the PyTorch library that raises an exception when calling the `dim_order` method on a sparse tensor. Differential Revision: [D68797044](https://our.internmc.facebook.com/intern/diff/D68797044/)
true
2,816,882,504
update aotdispatcher_inference_subclass_cpu results
davidberard98
closed
[ "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
10
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145887 The regression comes from https://github.com/pytorch/pytorch/pull/145420. I'm not sure if there's a fixed way to determine what the new value should be - but I estimated a ~0.9% increase based on the last few days of benchmark results. <img width="953" alt="Screenshot 2025-01-28 at 2 39 39 PM" src="https://github.com/user-attachments/assets/59d8287c-6c67-4617-842e-38623ef664cf" /> cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,816,868,263
[export] Add distributed tests
angelayi
closed
[ "fb-exported", "ciflow/trunk", "topic: not user facing" ]
6
CONTRIBUTOR
Test Plan: CI Differential Revision: D68799386
true
2,816,867,600
Hacky solution to bad interaction between AOTAutogradcache and Triton 3.1
jamesjwu
closed
[ "Stale", "topic: not user facing", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145885
true
2,816,864,295
change the test wheel to the release wheel when the release wheel is available
pytorchbot
closed
[ "open source" ]
4
COLLABORATOR
change the test wheel to the release wheel when the release wheel is available
true
2,816,855,618
Inductor: don't reuse buffer if it would increase peak memory
eellison
open
[ "triaged", "oncall: pt2", "module: inductor", "internal ramp-up task" ]
0
CONTRIBUTOR
### 🚀 The feature, motivation and pitch Inductor has a config `allow_buffer_reuse` which will reuse a dead Tensor in memory allocation if the Tensor matches the newly allocated # of bytes. In some cases, if we are reusing a buffer during peak memory, this can increase memory usage. We should track current allocated and peak memory during inductor codegen. and only reuse a buffer if it does not increase peak memory. We already do a similar memory tracking [here](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/memory.py). See, [buffer reuse logic](https://github.com/pytorch/pytorch/blob/af43b445a5b03ffbeab1d430d2232f48dec3053d/torch/_inductor/codegen/wrapper.py#L492-L496). ### Alternatives master ### Additional context _No response_ cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
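The tracking this issue asks for can be sketched with a toy running counter (an assumed shape for illustration, not Inductor's actual planner in `torch/_inductor/memory.py`): record currently allocated bytes and the running peak as codegen emits allocations and frees, and consult that state before deciding to reuse a dead buffer.

```python
class MemTracker:
    # Toy model of the proposed bookkeeping: current allocated bytes
    # plus the running peak, updated as codegen emits allocs and frees.
    def __init__(self):
        self.allocated = 0
        self.peak = 0

    def alloc(self, nbytes):
        self.allocated += nbytes
        self.peak = max(self.peak, self.allocated)

    def free(self, nbytes):
        self.allocated -= nbytes

tracker = MemTracker()
tracker.alloc(100)  # buffer A
tracker.alloc(50)   # buffer B
tracker.free(100)   # A dies; reusing it would keep its bytes live here
tracker.alloc(100)  # buffer C
```

With this state available at each codegen step, a reuse decision could compare the peak that results from keeping the dead buffer alive against the peak from freeing it and allocating fresh.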
2,816,849,508
bump counters for unbacked binding names
avikchaudhuri
closed
[ "fb-exported", "Merged", "ciflow/trunk", "ciflow/inductor", "release notes: export" ]
5
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145882 Instead of bumping symint counters when we process unbacked bindings during deserialization, it's better to bump them at the beginning based on what the symbols in the original shape env before serialization were. This allows symbols in unbacked bindings to have "gaps" that bumping alone would not be able to match. Why is bumping counters important at all? It is because when the shape env coming out of deserialization is used later for propagating symints, say in run_decompositions, we don't want new names to clash with existing names (bad things happen). Differential Revision: [D68798191](https://our.internmc.facebook.com/intern/diff/D68798191/)
true
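The "bump past the original symbols" idea in the PR description can be sketched as follows (a hypothetical helper; real unbacked symbols are SymPy symbols in the ShapeEnv, modeled here as strings like "u0"): rather than incrementing the counter once per binding processed, set it just past the largest existing unbacked index, so serialized names with gaps still never clash with freshly created symbols.

```python
import re

def next_unbacked_counter(symbol_names):
    # Collect indices of unbacked symbols ("u<N>"), ignoring other
    # kinds of symbols ("s<N>"), and start the counter one past the
    # maximum -- gaps such as {u0, u7} are handled naturally.
    indices = [int(m.group(1)) for name in symbol_names
               if (m := re.fullmatch(r"u(\d+)", name))]
    return max(indices, default=-1) + 1
```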
2,816,788,137
[dynamo][builtin-skipfiles-cleanup] Remove threading and multithreading
anijain2305
closed
[ "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * (to be filled) cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,816,788,004
[dynamo][builtin-skipfiles-cleanup] Remove operator
anijain2305
closed
[ "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145804 * #145881 * __->__ #145880 * #145879 * #145878 * #145876 * #145875 * #145856 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,816,787,653
[dynamo][bulitin-skipfiles-cleanup] Remove traceback
anijain2305
closed
[ "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145804 * #145881 * #145880 * __->__ #145879 * #145878 * #145876 * #145875 * #145856 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,816,782,063
[dynamo][builtin-skipfiles-cleanup] Remove threading, _collections_abc, _weakrefset, threading
anijain2305
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145804 * #145876 * #145909 * #145892 * __->__ #145878 * #145875 * #145856 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,816,760,654
[pytorch] Sprinkle in a few `template` keywords
VasuAgrawal
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing" ]
13
CONTRIBUTOR
Summary: These seem to be necessary to get compilation working on Windows with CUDA 12.8. I'm not sure whether this means that all of the previous compilers were broken, and the new one is better, or whether this is a regression in NVCC 12.8. Either way, as long as the CI passes for existing versions, this should unblock us from CUDA 12.8 enablement on Windows. See D68663662 for more details on the CUDA 12.8 enablement. Test Plan: CI! Reviewed By: akrieger Differential Revision: D68787925
true
2,816,759,508
[dynamo][builtin-skipfiles-cleanup] Remove copy
anijain2305
closed
[ "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145804 * __->__ #145876 * #145958 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,816,754,999
[dynamo][builtin-skipfiles-removal] Remove logging
anijain2305
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "module: dynamo", "ciflow/inductor" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145804 * #145876 * #145909 * #145892 * #145878 * __->__ #145875 * #145856 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,816,715,796
Use of broadcast_shapes() errors attempting to guard on symbolic nested int
jbschlosser
closed
[ "triaged", "module: nestedtensor", "oncall: pt2", "module: dynamic shapes" ]
0
CONTRIBUTOR
Repro:
```python
import torch

torch._dynamo.config.capture_dynamic_output_shape_ops = True
torch._dynamo.config.capture_scalar_outputs = True

nt = torch.nested.nested_tensor([
    torch.randn(2),
    torch.randn(3),
    torch.randn(4),
], layout=torch.jagged, device="cuda")

@torch.compile(fullgraph=True)
def f(t, mask):
    nt = torch.nested.masked_select(t, mask)
    return torch.where(nt > 0., torch.ones_like(nt), torch.zeros_like(nt))

t = torch.randn(3, 5)
mask = torch.randint(0, 2, t.shape, dtype=torch.bool)
output = f(t, mask)
```
Problem: `torch.nested.masked_select()` constructs an NJT in-graph, which invokes a special path for constructing a new symbolic nested int. Attempting to guard on this (due to usage of `where()` -> `broadcast_shapes()` -> `Ne(s1, 1)` guard on the symbolic nested int) gives this error:
```
...
AssertionError: s0 (could be from ['<ephemeral: intermediate_offsets_or_lengths>']) not in {s0: []}. If this assert is failing, it could be due to the issue described in https://github.com/pytorch/pytorch/pull/90665
```
cc @cpuhrsch @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @chauhang @penguinwu @ezyang @bobrenjc93
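For context on where the `Ne(s1, 1)` guard comes from: broadcasting compares every dimension against 1, so a symbolic size necessarily gets tested for equality with 1. A minimal plain-Python sketch of the broadcasting rule (illustrative only, not PyTorch's implementation):

```python
def broadcast_shapes(*shapes):
    """Minimal NumPy-style broadcasting rule. The comparison of each dim
    against 1 is the step that, with a symbolic size s1, would emit
    guards such as Ne(s1, 1)."""
    ndim = max(len(s) for s in shapes)
    # Left-pad shorter shapes with 1s so all shapes have equal rank.
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        non_one = {d for d in dims if d != 1}
        if len(non_one) > 1:
            raise ValueError(f"incompatible dims {dims}")
        out.append(non_one.pop() if non_one else 1)
    return tuple(out)

assert broadcast_shapes((3, 1), (1, 5)) == (3, 5)
assert broadcast_shapes((2,), (4, 2)) == (4, 2)
```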
true
2,816,697,644
[WIP] Allow generation of inductor backend specific tests using instantiate_device_type_tests
kundaMwiza
open
[ "open source", "module: inductor" ]
2
CONTRIBUTOR
This allows the creation of inductor backend specific test classes. Since this is an extension point for out of tree backends, it also allows out of tree backends to customise test instantiation to fit their backend / device. To maintain backwards compatibility, `only_inductor_backends` defaults to `None` so that the behaviour of test class instantiation matches the incumbent behaviour. If `only_inductor_backends` is not None, inductor backend specific test classes will be created from a test template, e.g. `TestInductorOpInfo -> TestInductorOpInfoTritonCUSTOMDEVICE, TestInductorOpInfoHalideCUSTOMDEVICE` An illustration of the before/after changes:
```python
# in test_inductor.py

# Inductor test template
class TestInductor:
    def test_comprehensive(...)

# Original
instantiate_device_type_tests(TestSuiteTemplate)
# Generates something like this:
# TestInductorCPU
# TestInductorCUDA

# After changes
instantiate_device_type_tests(TestSuiteTemplate, enable_inductor_backend_classes=True, only_inductor_backends=["cpp", "triton"])
# TestInductorCppCPU
# TestInductorCppTriton
# TestInductorTritonCUDA

# Additionally, the new test classes if a native inductor backend is used are guarded
# e.g. TestInductorCppCPU
# is equivalent to the following class definition
# @skipUnless(HAS_CPU, "Requires C++ compiler")
# @config.patch("cpu_backend", "cpp")
# class TestInductorCppCPU(CPUTestBase)
#     ...
```
Fixes #ISSUE_NUMBER cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,816,690,663
[binary builds] Anaconda. Remove dependency on conda libuv module in MacOS and Windows nightly builds
atalman
closed
[ "oncall: releng", "triaged", "topic: binaries" ]
6
CONTRIBUTOR
### 🐛 Describe the bug Related to: https://github.com/pytorch/pytorch/issues/138506 In Windows and MacOS wheel builds we use libuv as a build-time dependency. On Windows we use libuv during the nightly build: .ci/pytorch/windows/condaenv.bat On MacOS: .ci/wheel/build_wheel.sh .github/requirements/conda-env-macOS-ARM64 This package is available via conda: https://anaconda.org/anaconda/libuv and is not available via pip install. As the first step in refactoring the workflows that depend on conda, we want to refactor the code to use a different install method for this package. Maybe installing via download? https://github.com/libuv/libuv#downloading If possible, one should try to use the same method of installation for both Windows and MacOS. ### Versions 2.7.0
true
2,816,690,584
[CD] Install ninja and setuptools from PyPI
malfet
closed
[ "Merged", "Reverted", "topic: not user facing", "ciflow/binaries_wheel", "ci-no-td" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145889 * #145870 * __->__ #145871 As well as typing extensions, they are available from PyPI, no reason to install them from Anaconda
true
2,816,679,092
[CMake] Find HomeBrew OpenMP on MacOS
malfet
closed
[ "Merged", "Reverted", "ciflow/trunk", "release notes: build", "topic: improvements", "ciflow/binaries_wheel", "ci-no-td" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145889 * __->__ #145870 Either via `OMP_PREFIX` envvar or by searching in `/opt/homebrew/opt/libomp` folder Modify libomp bundling logic in setup.py to change absolute path to libomp.dylib to a relative one if necessary
true
2,816,676,523
Compiled flex bias appears to not work at all
leijurv
open
[ "triaged", "bug", "oncall: pt2", "module: higher order operators", "module: pt2-dispatcher", "module: flex attention" ]
5
NONE
### 🐛 Describe the bug
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

N_BATCH = 16
N_HEAD = 32
N_CTX = 4096
D_QK = 128

torch.set_default_device("cuda")

@torch.compile
def main():
    q = torch.randn(N_BATCH, N_HEAD, N_CTX, D_QK, requires_grad=True)
    k = torch.randn(N_BATCH, N_HEAD, N_CTX, D_QK, requires_grad=True)
    v = torch.randn(N_BATCH, N_HEAD, N_CTX, D_QK, requires_grad=True)

    # source: https://github.com/pytorch/pytorch/blob/53fc921ce2bcfd29b0adc42b72b86a982a690e30/test/inductor/test_flex_attention.py#L2978
    #bias_1 = torch.randn(N_BATCH, N_HEAD, N_CTX, N_CTX, requires_grad=True)
    #def score_mod(score, b, h, q_idx, kv_idx):
    #    return score + bias_1[b, h, q_idx, kv_idx]
    # illegal memory access

    # source: https://github.com/pytorch/pytorch/blob/53fc921ce2bcfd29b0adc42b72b86a982a690e30/test/inductor/test_flex_attention.py#L4714
    #bias_2 = torch.randn(N_CTX, N_CTX, requires_grad=True)
    #def score_mod(score, b, h, q_idx, kv_idx):
    #    return score + bias_2[q_idx, kv_idx]
    # AssertionError: size=[16, 32, 4096, 128], stride=[4096, 1]

    # source: https://github.com/pytorch/pytorch/blob/53fc921ce2bcfd29b0adc42b72b86a982a690e30/test/inductor/test_flex_attention.py#L209
    #bias_3 = torch.randn(N_HEAD, requires_grad=True)
    #def score_mod(score, b, h, q_idx, kv_idx):
    #    return score * bias_3[h]
    # AssertionError: size=[16, 32, 4096, 128], stride=[1]

    flex_attention(q, k, v, score_mod=score_mod).sum().backward()

main()
```
I have taken three examples of bias from the flex tests, so I expect them to work. The first one gets an illegal memory access. The second and third get deep assertion errors. This is on latest nightly.

### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250128+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.4.131 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA H100 PCIe Nvidia driver version: 550.127.05 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 26 On-line CPU(s) list: 0-25 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Platinum 8480+ CPU family: 6 Model: 143 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 26 Stepping: 8 BogoMIPS: 4000.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities Virtualization: 
VT-x Hypervisor vendor: KVM Virtualization type: full L1d cache: 832 KiB (26 instances) L1i cache: 832 KiB (26 instances) L2 cache: 104 MiB (26 instances) L3 cache: 416 MiB (26 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-25 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Unknown: No mitigations Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; TSX disabled Versions of relevant libraries: [pip3] flake8==4.0.1 [pip3] numpy==1.21.5 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-cusparselt-cu12==0.6.2 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] optree==0.13.1 [pip3] pytorch-triton==3.2.0+gitb2684bf3 [pip3] torch==2.7.0.dev20250128+cu124 [pip3] triton==3.1.0 [conda] Could not collect ``` cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
true
2,816,661,559
Fix code cache + freezing compile-time regression
masnesral
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145868 Summary: The current implementation introduces a compile-time regression due to the overhead of hashing large constants. To support freezing+caching, we consider only the tensor metadata of frozen params, but we neglect to do the same for any constants created as a result of folding frozen params. This PR explicitly marks the constants created during freezing (and constant folding during freezing) and uses that info in the inductor cache to determine when to hash a tensor value+metadata vs. metadata only. Test Plan: `python benchmarks/dynamo/torchbench.py --backend inductor --device cuda --only alexnet --bfloat16 --cold-start-latency --print-compilation-time --inference --performance --freezing` cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
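A hedged sketch of the metadata-only hashing idea (plain Python stand-in; `hash_tensor_for_cache` and its signature are hypothetical, not the inductor cache API):

```python
import hashlib
import pickle

def hash_tensor_for_cache(shape, dtype, data, frozen):
    """Hash only (shape, dtype) for tensors marked as frozen params (or
    constants folded from them); include the raw bytes otherwise."""
    h = hashlib.sha256()
    h.update(pickle.dumps((shape, dtype)))
    if not frozen:
        h.update(data)  # only pay for hashing large constants when needed
    return h.hexdigest()

# Two frozen constants with equal metadata but different values hash
# identically, so their (possibly huge) payloads are never hashed...
a = hash_tensor_for_cache((2, 2), "float32", b"\x00" * 16, frozen=True)
b = hash_tensor_for_cache((2, 2), "float32", b"\x01" * 16, frozen=True)
assert a == b
# ...while ordinary constants still hash by value.
c = hash_tensor_for_cache((2, 2), "float32", b"\x00" * 16, frozen=False)
d = hash_tensor_for_cache((2, 2), "float32", b"\x01" * 16, frozen=False)
assert c != d
```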
true
2,816,653,998
[triton] Update pin to tip of 3.2 release
bertmaher
closed
[ "Merged", "Reverted", "ciflow/trunk", "topic: not user facing", "ciflow/inductor", "ci-no-td" ]
13
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145867
true
2,816,650,624
[pytorch][cuda] Improve softmax backward pass native CUDA implementation
ahmadsharif1
closed
[ "Merged", "ciflow/trunk", "release notes: cuda" ]
10
CONTRIBUTOR
This PR is similar to https://github.com/pytorch/pytorch/pull/122970, but works on the softmax backward pass. Specifically, it uses shared memory to cache the gradOutput when it can fit in shared memory. Before this PR we were reading gradOutput twice. On my H100 this seems to improve the softmax backward pass performance by about 5% for problem sizes that fit within shared memory. (Note that this is not the only kernel that runs when you call softmax backward pass -- there is an elementwise kernel that runs before this; optimizing that can be a separate PR). **Important Note**: Currently the softmax backward pass consists of an [element-wise multiply operator](https://github.com/pytorch/pytorch/blob/7f65a208848205b38445423b7e2e93a2b4994e5e/aten/src/ATen/native/cuda/SoftMax.cu#L1216), followed by [this function](https://github.com/pytorch/pytorch/blob/7f65a208848205b38445423b7e2e93a2b4994e5e/aten/src/ATen/native/cuda/SoftMax.cu#L1062) which calls the `cunn_SoftMaxBackward` kernel. With my change the kernel time reduces by about 12% (see screenshot below), while the total time (including the elementwise) reduces by about 5%. 
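For reference, the backward formula being computed is dX = (dY - sum(dY * Y)) * Y, which is why gradOutput (dY) is read twice: once for the row reduction and once for the elementwise combine. A plain-Python sketch of the math (illustrative only, not the CUDA kernel):

```python
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_backward(grad_out, out):
    # dX = (dY - sum(dY * Y)) * Y: gradOutput is needed twice, once for
    # the row reduction and once for the elementwise combine, which is
    # what makes caching it in shared memory worthwhile.
    row_sum = sum(g * y for g, y in zip(grad_out, out))
    return [(g - row_sum) * y for g, y in zip(grad_out, out)]

out = softmax([0.5, -1.0, 2.0])
# Sanity check: the gradient of sum(softmax(x)) is ~0 per row, because
# softmax rows always sum to 1 (the sum is constant).
grad = softmax_backward([1.0, 1.0, 1.0], out)
assert all(abs(v) < 1e-12 for v in grad)
```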
``` Baseline This PR N size FP32 bandwidth FP16 bandwidth N size FP32 bandwidth FP16 bandwidth fp32 diff fp16 diff 0 256 134.340966 70.042039 0 256 133.70146 70.342753 -0.48% 0.43% 1 512 233.501185 129.945803 1 512 234.057145 132.933066 0.24% 2.30% 2 1024 340.667966 229.280464 2 1024 338.833265 226.441699 -0.54% -1.24% 3 2048 379.643726 337.452058 3 2048 399.559017 338.432284 5.25% 0.29% 4 4096 416.597537 383.625364 4 4096 428.252403 396.137506 2.80% 3.26% 5 6000 431.198241 384.384384 5 6000 457.744577 406.06275 6.16% 5.64% 6 8192 462.811252 427.292573 6 8192 474.791032 428.281563 2.59% 0.23% 7 10000 464.258731 429.050294 7 10000 483.7643 446.849381 4.20% 4.15% 8 10013 465.199701 429.824179 8 10013 464.904407 428.72184 -0.06% -0.26% 9 10240 477.07359 428.853737 9 10240 485.317024 444.902586 1.73% 3.74% 10 11000 473.038785 430.778663 10 11000 488.161438 453.462162 3.20% 5.27% 11 12000 474.342475 432.594814 11 12000 490.532418 458.427653 3.41% 5.97% 12 16384 487.468854 473.611576 12 16384 488.154406 476.264631 0.14% 0.56% 13 20000 482.029793 465.666186 13 20000 482.147092 483.886193 0.02% 3.91% 14 24000 478.368093 474.159464 14 24000 478.364948 491.447921 0.00% 3.65% 15 32000 476.523796 473.18868 15 32000 476.523796 474.398962 0.00% 0.26% 16 32768 476.104723 477.493634 16 32768 476.704463 477.330606 0.13% -0.03% 17 36864 477.900663 475.472787 17 36864 477.973279 475.728454 0.02% 0.05% 18 40960 477.707561 475.559064 18 40960 478.445017 476.088067 0.15% 0.11% 19 45056 479.169812 475.865134 19 45056 479.143266 475.878202 -0.01% 0.00% 20 49152 477.804907 475.382982 20 49152 477.868404 475.976377 0.01% 0.12% 21 65536 481.274125 478.171806 21 65536 481.537733 478.703926 0.05% 0.11% 22 66000 481.64652 480.095457 22 66000 481.856013 480.466388 0.04% 0.08% 23 68608 481.745774 479.034704 23 68608 481.917596 478.856209 0.04% -0.04% 24 80000 483.409361 480.356529 24 80000 483.330481 480.375277 -0.02% 0.00% 25 98304 480.736301 481.396882 25 98304 480.789858 481.320143 0.01% 
-0.02% ``` NCU profiler shows lower DRAM fetches with the new kernel: ![image](https://github.com/user-attachments/assets/f3606725-d8fc-4ea5-ae6d-9c188bf32d72) NCU reports about 12% elapsed time reduction in this kernel alone compared to baseline (and because of other kernels that are run, the overall backward pass time as seen by the user gets reduced by 5%). I compared the binary size increase by running `python setup.py develop` before and after and diffing the .so files: ![image](https://github.com/user-attachments/assets/8e6cee2e-3c7a-4fa4-8836-954047ce8ffc) libtorch_cuda.so goes from 274,752,224 bytes to 274,787,072 bytes. The increase in size is 34kB which is about 0.01%. I measured the compilation time for incremental development: ``` touch ./aten/src/ATen/native/cuda/SoftMax.cu time python setup.py develop real 0m10.083s user 0m8.197s sys 0m3.149s ``` Note that this uses `ccache` and does a bunch of copies and is not just measuring the `nvcc` time. I measured the `nvcc` time separately by capturing the `nvcc` command shown in [1] below and running it on the baseline and modified kernels: ``` # baseline nvcc time for SoftMax.cu real 0m35.341s user 0m33.801s sys 0m1.289s # this PR's nvcc time for SoftMax.cu real 0m36.513s user 0m34.722s sys 0m1.408s ``` So the `nvcc` time increases by about 1 second, or ~3% of the baseline. 
[1] `nvcc` command is here: ``` # This is the nvcc command /usr/local/cuda/bin/nvcc -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_BUILD_MAIN_LIB -DTORCH_CUDA_USE_NVTX3 -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_FLASH_ATTENTION -DUSE_MEM_EFF_ATTENTION -DUSE_NCCL -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cuda_EXPORTS -I/home/ahmads/personal/pytorch/build/aten/src -I/home/ahmads/personal/pytorch/aten/src -I/home/ahmads/personal/pytorch/build -I/home/ahmads/personal/pytorch -I/home/ahmads/personal/pytorch/cmake/../third_party/benchmark/include -I/home/ahmads/personal/pytorch/third_party/onnx -I/home/ahmads/personal/pytorch/build/third_party/onnx -I/home/ahmads/personal/pytorch/nlohmann -I/home/ahmads/personal/pytorch/aten/src/THC -I/home/ahmads/personal/pytorch/aten/src/ATen/cuda -I/home/ahmads/personal/pytorch/third_party/fmt/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/tools/util/include -I/home/ahmads/personal/pytorch/build/caffe2/aten/src -I/home/ahmads/personal/pytorch/aten/src/ATen/.. -I/home/ahmads/personal/pytorch/build/nccl/include -I/home/ahmads/personal/pytorch/c10/cuda/../.. -I/home/ahmads/personal/pytorch/c10/.. 
-I/home/ahmads/personal/pytorch/third_party/tensorpipe -I/home/ahmads/personal/pytorch/build/third_party/tensorpipe -I/home/ahmads/personal/pytorch/third_party/tensorpipe/third_party/libnop/include -I/home/ahmads/personal/pytorch/torch/csrc/api -I/home/ahmads/personal/pytorch/torch/csrc/api/include -isystem /home/ahmads/personal/pytorch/build/third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/ahmads/personal/pytorch/third_party/protobuf/src -isystem /home/ahmads/personal/pytorch/third_party/XNNPACK/include -isystem /home/ahmads/personal/pytorch/third_party/ittapi/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/ahmads/personal/pytorch/torch/include -isystem /home/ahmads/personal/pytorch/third_party/ideep/include -isystem /home/ahmads/personal/pytorch/torch/include/oneapi/dnnl -isystem /home/ahmads/personal/pytorch/INTERFACE -isystem /home/ahmads/personal/pytorch/third_party/nlohmann/include -isystem /home/ahmads/personal/pytorch/third_party/NVTX/c/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/cudnn_frontend/include -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -D_GLIBCXX_USE_CXX11_ABI=1 -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_90,code=sm_90 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda 
-DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -O3 -DNDEBUG -std=c++17 -Xcompiler=-fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Xcompiler -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-but-set-variable -Wno-maybe-uninitialized -MD -MT caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/SoftMax.cu.o -MF caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/SoftMax.cu.o.d -x cu -c /home/ahmads/personal/pytorch/aten/src/ATen/native/cuda/SoftMax.cu -o caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/SoftMax.cu.o ```
true
2,816,641,402
[WIP] Add test_torchinductor_opinfo.py to triton-cpu tests
davidberard98
closed
[ "Stale", "ciflow/inductor", "keep-going" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145865 I'm guessing this isn't going to pass, but I want to see how long the CI takes.
true
2,816,546,277
Segmentation fault at _rowwise_prune with convert_hf_to_gguf from llama.cpp
atmaranto
open
[ "module: crash", "module: windows", "triaged", "module: 64-bit" ]
2
NONE
### 🐛 Describe the bug When converting the DeepSeek R1 LLAMA-70B distilled model to a GGUF file, the program [convert_hf_to_gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py) silently crashes. Attaching to WinDbg, it appears torch_cpu causes a segfault: ``` (cfd4.d19c): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. torch_cpu!at::native::_rowwise_prune+0x30b: 00007ffc`ba8da7bb 0fb610 movzx edx,byte ptr [rax] ds:00000000`5e001424=?? (some omitted for brevity) 0:000> K # Child-SP RetAddr Call Site 00 000000a3`7c7ed990 00007ffc`ba8dafbf torch_cpu!at::native::_rowwise_prune+0x30b 01 000000a3`7c7eda60 00007ffc`bb496ace torch_cpu!at::native::_local_scalar_dense_cpu+0x3f 02 000000a3`7c7edab0 00007ffc`bb46e001 torch_cpu!at::cpu::where_outf+0x1a7e 03 000000a3`7c7edae0 00007ffc`bb0617f4 torch_cpu!at::cpu::bucketize_outf+0x41271 04 000000a3`7c7edb10 00007ffc`ba8db14f torch_cpu!at::_ops::_local_scalar_dense::call+0x154 05 000000a3`7c7edc20 00007ffc`bb78fe8e torch_cpu!at::native::item+0x17f 06 000000a3`7c7edcb0 00007ffc`bb76fe31 torch_cpu!at::compositeimplicitautograd::where+0x3cce 07 000000a3`7c7edce0 00007ffc`baf1d954 torch_cpu!at::compositeimplicitautograd::broadcast_to_symint+0x33e01 08 000000a3`7c7edd10 00007ffc`bbbd1cd3 torch_cpu!at::_ops::item::call+0x154 09 000000a3`7c7ede20 00007ffd`62ad8507 torch_cpu!at::Tensor::item<unsigned char>+0x13 0a 000000a3`7c7ede70 00007ffd`626cef39 torch_python!pybind11::detail::type_caster<at::Tensor,void>::load+0x2b7 0b 000000a3`7c7edee0 00007ffd`b5174e86 torch_python!isMainPyInterpreter+0x3ae9 0c 000000a3`7c7ee060 00007ffd`b5296fc3 python312!_PyObject_Call+0xd6 [\Objects\call.c @ 369] 0d 000000a3`7c7ee0a0 00007ffd`b5175034 python312!_PyEval_EvalFrameDefault+0x6bd3 [\PCbuild\Python\bytecodes.c @ 3263] 0e 000000a3`7c7ee290 00007ffd`b51dd37f python312!_PyFunction_Vectorcall+0x54 [\Objects\call.c @ 
424] 0f 000000a3`7c7ee2d0 00007ffd`b51e1802 python312!_PyObject_VectorcallTstate+0x4f [\Include\internal\pycore_call.h @ 92] 10 (Inline Function) --------`-------- python312!vectorcall_unbound+0x26 [\Objects\typeobject.c @ 2236] 11 000000a3`7c7ee310 00007ffd`b51ec939 python312!vectorcall_method+0xd2 [\Objects\typeobject.c @ 2268] 12 000000a3`7c7ee370 00007ffd`62b20371 python312!slot_sq_item+0x49 [\Objects\typeobject.c @ 8493] 13 000000a3`7c7ee3b0 00007ffd`62b2158d torch_python!torch::utils::getTHPMemoryFormat+0x2891 14 000000a3`7c7ee530 00007ffd`62b1f8ea torch_python!torch::utils::getTHPMemoryFormat+0x3aad 15 000000a3`7c7ee800 00007ffd`62757515 torch_python!torch::utils::getTHPMemoryFormat+0x1e0a 16 000000a3`7c7eea30 00007ffd`b51c1a93 torch_python!THPPointer<THPStorage>::THPPointer<THPStorage>+0x2a095 17 000000a3`7c7eec60 00007ffd`b5174e86 python312!cfunction_call+0x63 [\Objects\methodobject.c @ 540] 18 000000a3`7c7eec90 00007ffd`da95aa0d python312!_PyObject_Call+0xd6 [\Objects\call.c @ 369] 19 000000a3`7c7eecd0 00007ffd`da931283 _safetensors_rust_cp312_win_amd64!PyInit__safetensors_rust+0x1626d 1a 000000a3`7c7eed40 00007ffd`da933be9 _safetensors_rust_cp312_win_amd64+0x11283 1b 000000a3`7c7ef080 00007ffd`b52914e0 _safetensors_rust_cp312_win_amd64+0x13be9 ``` To reproduce the issue, download [this model](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) and run within the llama.cpp main directory: ```bash python convert_hf_to_gguf.py model_path ``` ### Versions ``` PyTorch version: 2.5.1 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Microsoft Windows 10 Pro (10.0.19045 64-bit) GCC version: Could not collect Clang version: Could not collect CMake version: version 3.28.0-rc5 Libc version: N/A Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:06:27) [MSC v.1942 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.19045-SP0 Is CUDA available: True CUDA runtime version: 
12.6.85 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090 Nvidia driver version: 566.36 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Name: 13th Gen Intel(R) Core(TM) i5-13600K Manufacturer: GenuineIntel Family: 205 Architecture: 9 ProcessorType: 3 DeviceID: CPU0 CurrentClockSpeed: 3500 MaxClockSpeed: 3500 L2CacheSize: 8192 L2CacheSpeed: None Revision: None Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] torch==2.5.1 [pip3] torchaudio==2.5.1 [pip3] torchvision==0.20.1 [conda] blas 1.0 mkl [conda] cuda-cudart 12.4.127 0 nvidia [conda] cuda-cudart-dev 12.4.127 0 nvidia [conda] cuda-cupti 12.4.127 0 nvidia [conda] cuda-libraries 12.4.1 0 nvidia [conda] cuda-libraries-dev 12.4.1 0 nvidia [conda] cuda-nvrtc 12.4.127 0 nvidia [conda] cuda-nvrtc-dev 12.4.127 0 nvidia [conda] cuda-nvtx 12.4.127 0 nvidia [conda] cuda-opencl 12.6.77 0 nvidia [conda] cuda-opencl-dev 12.6.77 0 nvidia [conda] cuda-runtime 12.4.1 0 nvidia [conda] libcublas 12.4.5.8 0 nvidia [conda] libcublas-dev 12.4.5.8 0 nvidia [conda] libcufft 11.2.1.3 0 nvidia [conda] libcufft-dev 11.2.1.3 0 nvidia [conda] libcurand 10.3.7.77 0 nvidia [conda] libcurand-dev 10.3.7.77 0 nvidia [conda] libcusolver 11.6.1.9 0 nvidia [conda] libcusolver-dev 11.6.1.9 0 nvidia [conda] libcusparse 12.3.1.170 0 nvidia [conda] libcusparse-dev 12.3.1.170 0 nvidia [conda] libnvjitlink 12.4.127 0 nvidia [conda] libnvjitlink-dev 12.4.127 0 nvidia [conda] mkl 2023.1.0 h6b88ed4_46358 [conda] mkl-service 2.4.0 py312h2bbff1b_1 [conda] mkl_fft 1.3.11 py312h827c3e9_0 [conda] mkl_random 1.2.8 py312h0158946_0 [conda] numpy 1.26.4 py312hfd52020_0 [conda] numpy-base 1.26.4 py312h4dde369_0 [conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9_0 pytorch [conda] pytorch-cuda 12.4 h3fd98bf_7 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 2.5.1 pypi_0 pypi [conda] torchvision 0.20.1 
pypi_0 pypi ``` cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
true
2,816,540,796
Cleanup VS 2019 refs in pytorch
Camyll
closed
[ "Merged", "ciflow/binaries", "ciflow/trunk", "release notes: releng", "test-config/default" ]
10
CONTRIBUTOR
Related to: https://github.com/pytorch/pytorch/issues/128835 Follow up on PR: https://github.com/pytorch/pytorch/pull/145319
true
2,816,533,217
Extend abi-stable nitpick message to all the c stable files
albanD
closed
[ "Merged", "ciflow/trunk", "topic: not user facing" ]
4
COLLABORATOR
null
true
2,816,532,662
CPU Model compile not working for flexattention
tpopok
closed
[ "oncall: pt2", "oncall: cpu inductor" ]
6
NONE
### 🐛 Describe the bug
Running the following code with the model on CPU (i.e., CUDA_VISIBLE_DEVICES=-1) throws an error.
```python
from torch.utils.data import DataLoader, Dataset
import torch
from torch import nn
from torch.nn.attention.flex_attention import flex_attention, create_block_mask, _create_empty_block_mask, BlockMask

# flex_attention = torch.compile(flex_attention)

# Create a simple custom dataset
class MyDataset(Dataset):
    def __init__(self):
        self.data = torch.randn(100, 32)
        self.labels = torch.randint(0, 2, (100,))

    def __len__(self):
        return 10000000

    def __getitem__(self, idx):
        return self.data[idx % 100], self.labels[idx % 100]

class MyModel(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.loss = torch.nn.MSELoss()
        self.q_proj = nn.LazyLinear(128)
        self.k_proj = nn.LazyLinear(128)
        self.v_proj = nn.LazyLinear(128)

    def forward(self, x, y):
        x = x[None, None, :, :]
        r = x
        q = self.q_proj(x)
        k = self.k_proj(x)
        v = self.v_proj(x)
        r = flex_attention(q, k, v)
        return self.loss(torch.sum(r[0, 0, :, :], dim=-1), y)

dataset = MyDataset()
data_loader = DataLoader(dataset, batch_size=3, shuffle=True)
model = MyModel()
model.compile()

for x, y in data_loader:
    print(model(x, y))
    break
```
Error trace:
```
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Traceback (most recent call last):
  File "/data/mizhou/workspace/torch_p13n_embedding/py/tmp/test_flex_attn.py", line 44, in <module>
    print(model(x, y))
          ^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1737, in _wrapped_call_impl
    return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
    return self._torchdynamo_orig_callable(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
    result = self._inner_convert(
             ^^^^^^^^^^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
    return _compile(
           ^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
    return _compile_inner(code, one_graph, hooks, transform)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
    out_code = transform_code_object(code, transform)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
    transformations(instructions, code_options)
  File
"/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform tracer.run() File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run super().run() File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run while self.step(): ^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step self.dispatch_table[inst.opcode](self, inst) File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE self._return(inst) File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return self.output.compile_subgraph( File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph self.compile_and_call_fx_graph( File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph compiled_fn = self.call_user_compiler(gm) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler return self._call_user_compiler(gm) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler compiled_fn = compiler_fn(gm, self.example_inputs()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__ compiled_gm = compiler_fn(gm, example_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/__init__.py", line 2340, in __call__ return compile_fx(model_, inputs_, config_patches=self.config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx return aot_autograd( ^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__ cg = aot_module_simplified(gm, example_inputs, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified compiled_fn = dispatch_and_compile() ^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile compiled_fn, _ = create_aot_dispatcher_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function return _create_aot_dispatcher_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function compiled_fn, fw_metadata = compiler_fn( ^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 678, in aot_dispatch_autograd compiled_fw_func = aot_config.fw_compiler(fw_module, adjusted_flat_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__ return self.compiler_fn(gm, example_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base return inner_compile( ^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper inner_compiled_fn = compiler_fn(gm, example_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 685, in _compile_fx_inner mb_compiled_graph = fx_codegen_and_compile( ^^^^^^^^^^^^^^^^^^^^^^^ File 
"/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile graph.run(*example_inputs) File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 855, in run return super().run(*args) ^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/fx/interpreter.py", line 167, in run self.env[node] = self.run_node(node) ^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1496, in run_node result = super().run_node(n) ^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/fx/interpreter.py", line 230, in run_node return getattr(self, n.op)(n.target, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1074, in call_function return super().call_function(target, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/mizhou/workspace/torch_p13n_embedding/.venv_torch26/lib/python3.12/site-packages/torch/fx/interpreter.py", line 310, in call_function return target(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: IndexError: tuple index out of range Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information You can suppress this exception and fall back to eager by setting: 
import torch._dynamo torch._dynamo.config.suppress_errors = True ``` ### Versions PyTorch version: 2.6.0+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.39 Python version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (64-bit runtime) Python platform: Linux-5.10.226-214.879.amzn2.x86_64-x86_64-with-glibc2.39 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB Nvidia driver version: 550.90.12 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 48 bits virtual CPU(s): 96 On-line CPU(s) list: 0-95 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz Stepping: 7 CPU MHz: 3582.060 BogoMIPS: 5999.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 1.5 MiB L1i cache: 1.5 MiB L2 cache: 48 MiB L3 cache: 71.5 MiB NUMA node0 CPU(s): 0-23,48-71 NUMA node1 CPU(s): 24-47,72-95 Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported Vulnerability L1tf: Mitigation; PTE 
Inversion Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Meltdown: Mitigation; PTI Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Vulnerable Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==2.1.3 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-cusparselt-cu12==0.6.2 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] torch==2.6.0+cu124 [pip3] torchmetrics==1.0.3 [pip3] torchrec==1.1.0+cu124 [pip3] triton==3.2.0 [conda] numpy 1.23.5 
pypi_0 pypi [conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi [conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi [conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi [conda] torch 2.3.1 pypi_0 pypi [conda] torchaudio 2.3.1 pypi_0 pypi [conda] torchvision 0.18.1 pypi_0 pypi [conda] triton 2.3.1 pypi_0 pypi cc @chauhang @penguinwu
true
2,816,490,704
[ONNX] Support subgraphs with 1+ outputs
justinchuby
closed
[ "module: onnx", "open source", "Merged", "ciflow/trunk", "release notes: onnx", "topic: bug fixes" ]
14
COLLABORATOR
Fixed a bug in `_handle_output_node` where additional output values were not added as graph outputs. Fixes #145734
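The shape of the fix can be sketched in plain Python (a schematic only — the function and list names here are illustrative, not the real torch.onnx internals): a handler that only registers the first value of an output node drops the rest, while iterating over all values keeps them.

```python
# Schematic of the bug class: registering only the first output value
# of a subgraph's output node vs. registering all of them.
# Names are illustrative, not the actual torch.onnx implementation.

def handle_output_node_buggy(output_values, graph_outputs):
    # Old behavior: only the first value became a graph output.
    graph_outputs.append(output_values[0])

def handle_output_node_fixed(output_values, graph_outputs):
    # Fixed behavior: every value of the output node is registered.
    for value in output_values:
        graph_outputs.append(value)

values = ["out_a", "out_b", "out_c"]

buggy, fixed = [], []
handle_output_node_buggy(values, buggy)
handle_output_node_fixed(values, fixed)
print(buggy)  # only one output survives
print(fixed)  # all three outputs survive
```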
true
2,816,456,758
[be][pytorch] Fix backend in autocast
nautsimon
closed
[ "fb-exported", "module: amp (automated mixed precision)", "Merged", "ciflow/trunk", "topic: not user facing" ]
4
MEMBER
Summary: fixing backend typo (BAKCNEDS -> BACKENDS) Test Plan: ci Differential Revision: D68573324 cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
true
2,816,409,441
Copy model before benchmark warmup runs
angelayi
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
5
CONTRIBUTOR
Fixes https://github.com/pytorch/pytorch/issues/144772 The eager warmup runs cause the model to change state, so that when we later export it, the model differs from the one we would export directly out of the box. For some reason exporting the model with the changed state causes issues, while exporting the initial model is fine. This is why the accuracy checks pass but the performance check fails when exporting. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
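A minimal sketch of the idea (pure Python, with a stand-in "model" whose state mutates on each call; all names are illustrative): snapshot the model with `copy.deepcopy` before the warmup runs, and export the snapshot.

```python
import copy

class StatefulModel:
    """Stand-in for a model whose buffers change during eager warmup."""
    def __init__(self):
        self.calls = 0  # e.g. a lazily-initialized buffer or counter

    def __call__(self, x):
        self.calls += 1  # warmup mutates state
        return x * 2

model = StatefulModel()

# Snapshot *before* warmup so export sees the initial state.
model_for_export = copy.deepcopy(model)

for _ in range(3):  # eager warmup runs
    model(1)

print(model.calls)             # 3: warmup changed the live model
print(model_for_export.calls)  # 0: the copy kept the initial state
```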
true
2,816,401,864
[export] don't always print GM in serdes logging
pianpwk
closed
[ "fb-exported", "Merged", "ciflow/trunk", "ciflow/inductor", "release notes: export" ]
4
CONTRIBUTOR
Summary: Didn't realize `print_readable()` would also print and not just return the string. Test Plan: . Differential Revision: D68781525
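For reference, a print-and-also-return API can be neutralized by capturing stdout (a generic Python sketch; the real `print_readable()` is only mimicked here):

```python
import contextlib
import io

def print_readable():
    """Mimics an API that both prints its string and returns it."""
    s = "graph(): ..."
    print(s)
    return s

# Capture the unwanted print while keeping the return value.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    text = print_readable()

print(repr(text))            # the string we actually wanted
print(repr(buf.getvalue()))  # the side-effect print, captured
```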
true
2,816,393,274
[dynamo][builtin-skipfiles-cleanup] Remove some builtins
anijain2305
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
7
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145804 * #145876 * #145909 * #145892 * #145878 * #145875 * __->__ #145856 [dynamo][builtin-skipfiles-cleanup] Remove more builtins cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,816,339,309
[c10d][ez] Remove goto in PGNCCL and make linter happy for PGNCCL and NCCLUtils
fduwjj
closed
[ "oncall: distributed", "Merged", "ciflow/trunk", "release notes: distributed (c10d)" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145855 While working on PGNCCL I found that the code triggers some lint warnings so this PR is to address them or add lint suppressor. cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
true
2,816,289,764
[c10d][UCC] Support coalescing in `c10d::ProcessGroupUCC` through alltoallv
samnordmann
closed
[ "oncall: distributed", "triaged", "open source", "Stale", "release notes: distributed (c10d)" ]
3
CONTRIBUTOR
# What
Add support for coalescing communication in `ProcessGroupUCC` by implementing the methods `startCoalescing` and `endCoalescing`. We need to impose several restrictions on the coalesced group:
1) we can only coalesce `send` and `recv` ops
2) we can only coalesce at most one `send` and one `recv` per pair of ranks
3) all ranks must participate in the `startCoalescing` and `endCoalescing` calls, even if the group is empty
4) we do not support tags for coalesced groups
# Why
Despite the above restrictions, we cover a number of useful data patterns, such as ring p2p, allgather, broadcast, alltoall, etc. These data patterns, or other custom ones, are conveniently written in terms of coalesced send/recv calls, which this patch makes possible. Recall that for a bidirectional p2p transfer, the send and recv need to be coalesced to enjoy full bidirectional bandwidth.
# How
Since UCC does not natively support coalescing, we implement it at the ProcessGroup level. The implementation relies on calling UCC's alltoallv, setting the count to `0` and the displacement to `nullptr` for ranks that do not exchange data. cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
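The alltoallv mapping described under "How" can be sketched in plain Python (schematic only — the real code builds UCC count/displacement arrays in C++): each rank lays out a per-peer send-count array that is zero for peers it does not send to.

```python
def build_alltoallv_counts(world_size, coalesced_sends):
    """Map a coalesced list of (dst_rank, num_elements) send ops onto
    alltoallv send counts: zero count for ranks we don't talk to.

    Schematic of the ProcessGroupUCC approach; restriction (2) above
    means at most one send per destination rank.
    """
    counts = [0] * world_size
    for dst, n in coalesced_sends:
        assert counts[dst] == 0, "at most one send per peer allowed"
        counts[dst] = n
    # Displacements are the running prefix sum of the counts.
    displs, acc = [], 0
    for c in counts:
        displs.append(acc)
        acc += c
    return counts, displs

# A rank sends 4 elements to rank 1 and 2 elements to rank 3 (world size 4).
counts, displs = build_alltoallv_counts(4, [(1, 4), (3, 2)])
print(counts)  # [0, 4, 0, 2]
print(displs)  # [0, 0, 4, 4]
```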
true
2,816,265,454
cmake: fix detection logic when using system XNNPACK
Harry-Chen
open
[ "triaged", "open source", "Stale", "topic: not user facing" ]
6
NONE
This commit makes the following improvements:
* The "elseif" branch now only runs when USE_XNNPACK is set.
* Use "REQUIRED" to enforce the existence of XNNPACK libraries, and remove the erroneous if statement ('or' should be 'OR').
* libmicrokernels-prod is built statically in XNNPACK [1], so change the PyTorch side accordingly.

[1]: https://github.com/google/XNNPACK/blob/d7f398ee5e135ef4f7045802eea973cc6cb26c6c/CMakeLists.txt#L819
true
2,816,243,859
RecursionError: maximum recursion depth exceeded in comparison
vpandya-quic
open
[ "triaged", "oncall: pt2", "module: dynamic shapes", "module: inductor" ]
3
NONE
### 🐛 Describe the bug
Running the following small test results in a recursion error:
```py
def test_a():
    def process_tensors(
        in_ptr0, in_ptr1, in_ptr2, in_ptr3, in_ptr4, out_ptr0, out_ptr1
    ):
        for x0 in range(32):
            for x1 in range(6):
                for x2 in range(64):
                    tmp0 = in_ptr0[x0, x1, x2]
                    tmp1 = in_ptr1[x1, x2]
                    tmp6 = in_ptr2[x0, x1, x2]
                    tmp13 = in_ptr3[x0 // 8, x1, x2]
                    tmp15 = in_ptr4[x0 // 8, x1, x2]
                    tmp2 = torch.cos(tmp1)
                    tmp3 = 1.0
                    tmp4 = tmp2 * tmp3
                    tmp5 = tmp0 * tmp4
                    tmp7 = torch.sin(tmp1)
                    tmp8 = tmp7 * tmp3
                    tmp9 = tmp6 * tmp8
                    tmp10 = tmp5 + tmp9
                    tmp11 = 0.3535533905932738
                    tmp12 = tmp10 * tmp11
                    tmp14 = tmp13 * tmp4
                    tmp16 = tmp15 * tmp8
                    tmp17 = tmp14 + tmp16
                    tmp18 = tmp17 * tmp11
                    out_ptr0[x0, x1, x2] = tmp12
                    out_ptr1[x0, x1, x2] = tmp18

    # Example usage:
    with torch.no_grad():
        in_ptr0 = torch.randn(32, 6, 64)
        in_ptr1 = torch.randn(6, 64)
        in_ptr2 = torch.randn(32, 6, 64)
        in_ptr3 = torch.randn(4, 6, 64)
        in_ptr4 = torch.randn(4, 6, 64)
        out_ptr0 = torch.zeros(32, 6, 64)
        out_ptr1 = torch.zeros(32, 6, 64)
        out_ptr0_ = torch.zeros(32, 6, 64)
        out_ptr1_ = torch.zeros(32, 6, 64)
        compiled_fn = torch.compile(
            process_tensors,
            backend="inductor",
            fullgraph=True,
        )
        process_tensors(in_ptr0, in_ptr1, in_ptr2, in_ptr3, in_ptr4, out_ptr0, out_ptr1)
        compiled_fn(in_ptr0, in_ptr1, in_ptr2, in_ptr3, in_ptr4, out_ptr0_, out_ptr1_)
```
### Error logs
[log_.txt](https://github.com/user-attachments/files/18576776/log_.txt)
### Versions
PyTorch version: 2.4.1+cpu Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 19.1.5 (++20241125104649+086d8e6bb5da-1~exp1~20241125104703.66) CMake version: version 3.29.0 Libc version: glibc-2.35 Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and
configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 52 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: AuthenticAMD Model name: AMD EPYC 9124 16-Core Processor CPU family: 25 Model: 17 Thread(s) per core: 1 Core(s) per socket: 16 Socket(s): 2 Stepping: 1 Frequency boost: disabled CPU max MHz: 3711.9141 CPU min MHz: 1500.0000 BogoMIPS: 5991.11 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d Virtualization: AMD-V L1d cache: 1 MiB (32 instances) L1i cache: 1 MiB (32 instances) L2 cache: 32 MiB (32 instances) L3 cache: 128 MiB (8 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-15 NUMA node1 CPU(s): 16-31 Vulnerability Gather data sampling: 
Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Mitigation; safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] onnx==1.16.0 [pip3] onnxruntime==1.16.3 [pip3] onnxscript==0.1.0.dev20240327 [pip3] torch==2.4.1+cpu [pip3] torch_geometric==2.5.2 [pip3] torch_qaic==0.1.0 [pip3] torch-tb-profiler==0.4.3 [conda] Could not collect cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
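As an aside, the per-element math in the reported loop collapses to a single fused expression, which is easy to sanity-check in plain Python (a schematic cross-check of the arithmetic, not a fix for the Inductor recursion itself): out0 = (in0·cos(in1) + in2·sin(in1)) · 1/√8, with out1 analogous using in3/in4 gathered at x0 // 8.

```python
import math

SCALE = 0.3535533905932738  # the constant in the report, == 1 / sqrt(8)

def elementwise(in0, in1, in2):
    """Collapsed form of the tmp0..tmp12 chain for one element:
    (in0 * cos(in1) + in2 * sin(in1)) * SCALE (the * 1.0 terms drop)."""
    return (in0 * math.cos(in1) + in2 * math.sin(in1)) * SCALE

def elementwise_literal(tmp0, tmp1, tmp6):
    """The literal tmp* chain from the issue, for cross-checking."""
    tmp2 = math.cos(tmp1)
    tmp4 = tmp2 * 1.0
    tmp5 = tmp0 * tmp4
    tmp7 = math.sin(tmp1)
    tmp8 = tmp7 * 1.0
    tmp9 = tmp6 * tmp8
    tmp10 = tmp5 + tmp9
    return tmp10 * SCALE

a, b, c = 0.7, 1.3, -0.2
print(abs(elementwise(a, b, c) - elementwise_literal(a, b, c)) < 1e-12)
```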
true
2,816,210,050
FIRE Relative Positional Encodings
kaddu341
open
[ "module: nn", "triaged" ]
2
NONE
### 🚀 The feature, motivation and pitch Hi, I'm currently working on the length generalization capabilities of transformers. As shown by Zhou et al. (https://arxiv.org/abs/2402.09371), FIRE positional encodings are excellent for this purpose as they can yield generalization results up to 2.5x the input length (in combination with other techniques). FIRE, which stands for Functional Interpolation for Relative Positional Encodings, was introduced by Li et al. (https://arxiv.org/pdf/2310.04418). I am planning to implement the algorithm from this paper myself, but I thought it would be useful if I could turn it into a PyTorch module so that others can benefit too. Therefore, I am proposing to add this feature to the PyTorch library. Please let me know what you think! ### Alternatives There are many other positional encoding types (sinusoidal, RoPE, learned, etc.), but for the specific task of length generalization, FIRE seems to be the most suitable based on several papers, which is why I am proposing this feature addition. ### Additional context Like other relative attention mechanisms, FIRE introduces positional information in the attention layers rather than adding it to the input (so the layer would subclass nn.Module and basically function as a drop-in replacement for MultiHeadAttention). Here is a screenshot of some evaluation results for FIRE from the original paper (Li et al., 2024): <img width="888" alt="Image" src="https://github.com/user-attachments/assets/debfb83d-54e9-4be9-a2d5-6f8c2bf1f473" /> cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
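Based on my reading of Li et al., the FIRE bias has the form b(i, j) = f_θ(ψ(i − j) / ψ(max(i, L))) with ψ(x) = log(cx + 1) and f_θ a small learned MLP. A minimal pure-Python sketch of the normalization, with f_θ replaced by the identity for illustration (the constants here are placeholders, not the paper's trained values):

```python
import math

def psi(x, c=1.0):
    """Log transform from the FIRE paper: psi(x) = log(c*x + 1)."""
    return math.log(c * x + 1.0)

def fire_bias(i, j, L=16):
    """Schematic FIRE bias for query position i attending to key j <= i.
    The learned MLP f_theta is replaced by the identity here."""
    rel = i - j              # relative distance, >= 0 under causal masking
    denom = psi(max(i, L))   # progressive-interpolation normalizer
    return psi(rel) / denom  # in the paper this would go through f_theta

# The bias is 0 at zero distance and grows monotonically with distance.
row = [round(fire_bias(10, j), 4) for j in range(10, 5, -1)]
print(row)
```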
true
2,816,207,018
Skip search for MKL on ARM cpus
malfet
closed
[ "Merged", "Stale", "ciflow/trunk", "topic: not user facing" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145850 It will not find it anyway, and this makes the CMake log a bit easier to parse on non-x86 systems
true
2,816,206,849
[BE] Include CheckFunctionExists in `FindBLAS.cmake`
malfet
closed
[ "Merged", "ciflow/trunk", "topic: not user facing" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145850 * __->__ #145849 It's used in the script, so it must be included
true
2,816,179,114
Integrate sympy expression provenance logging with structured logs
bobrenjc93
closed
[ "Merged", "ciflow/trunk", "release notes: fx", "topic: not user facing", "fx", "ciflow/inductor" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #145848 cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
true
2,816,159,985
Guard the CPU cpp wrapper tests on having a cpp wrapper
charlie-wt
open
[ "triaged", "open source", "topic: not user facing", "module: inductor" ]
12
CONTRIBUTOR
Since they're the CPU CPP wrapper tests, they should only run if the CPU backend we're using has a CPP wrapper. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
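A schematic of the guard (plain unittest; `backend_has_cpp_wrapper` is a made-up probe for illustration, not the real helper in the test suite):

```python
import unittest

def backend_has_cpp_wrapper():
    """Illustrative probe; the real check would ask the registered
    inductor CPU backend whether it provides a C++ wrapper codegen."""
    return False  # pretend we're on a backend without one

class TestCpuCppWrapper(unittest.TestCase):
    @unittest.skipUnless(backend_has_cpp_wrapper(),
                         "CPU backend has no C++ wrapper")
    def test_something(self):
        self.assertTrue(True)

result = unittest.TestResult()
TestCpuCppWrapper("test_something").run(result)
print(len(result.skipped))  # 1: the test was skipped, not run
```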
true
2,816,086,780
[async-TP] Fix scheduling in matmul+reduce-scatter for 2 ranks
lw
closed
[ "oncall: distributed", "Merged", "ciflow/trunk", "topic: not user facing" ]
4
CONTRIBUTOR
There's a sleep that is issued in order to "nudge" CUDA into making the right scheduling decision, but it is issued on iteration number 2. However, when the world size is 2, we never reach that iteration, which led to suboptimal scheduling. cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
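The off-by-one can be illustrated schematically (pure Python; the real code issues a CUDA sleep inside the pipelined matmul+reduce-scatter loop, and the iteration count is assumed here to be `world_size - 1`): a nudge gated on `idx == 2` never fires for world size 2.

```python
def nudge_iterations(world_size, nudge_at=2):
    """Return the loop indices at which the scheduling 'sleep' fires.
    Assumes the pipelined matmul+reduce-scatter runs world_size - 1
    steps (a simplification of the real async-TP loop)."""
    return [i for i in range(world_size - 1) if i == nudge_at]

print(nudge_iterations(8))  # [2]: the nudge fires as intended
print(nudge_iterations(2))  # []: only iteration 0 exists, so no nudge
```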
true
2,816,074,248
[OSS] Add kwargs to fsspec reader/writer
ankitageorge
closed
[ "oncall: distributed", "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing" ]
8
CONTRIBUTOR
Summary: Add kwargs to the fsspec reader/writer. These will be used when reading/writing from HuggingFace, since it needs a token to access the repositories. Test Plan: https://fburl.com/anp/agkrlas1 verifies the ability to read/write to HF with fsspec. Differential Revision: D68738777 cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
true
2,816,067,692
Move trunk windows builds to CUDA-12.4
atalman
closed
[ "Merged", "ciflow/trunk", "topic: not user facing" ]
6
CONTRIBUTOR
Same as: https://github.com/pytorch/pytorch/pull/130446 This should catch build regressions that were previously only detectable during the nightly builds for 12.4
true
2,816,065,231
WIP OpenVINOQuantizer
daniil-lyakhov
closed
[ "open source", "release notes: quantization", "release notes: AO frontend" ]
1
CONTRIBUTOR
Fixes #ISSUE_NUMBER
true
2,815,970,732
[MTIA][FSDP2] Enable MTIA device in FSDP2 library code
jvandebon
closed
[ "oncall: distributed", "fb-exported", "Merged", "ciflow/trunk", "release notes: distributed (fsdp)", "ciflow/inductor" ]
13
CONTRIBUTOR
Differential Revision: D68560256 cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
true
2,815,909,429
clamp_min did not work
moghadas76
closed
[ "needs reproduction" ]
2
NONE
### 🐛 Describe the bug
```python
import numpy as np
import torch

file = np.load("tensor.zip")
tensor = torch.from_numpy(file.f.arr_0)
print(tensor)
print(tensor.shape)
print(torch.ge(tensor.clamp_min(1e-6), 0.0).all())
print(torch.ge(tensor.abs()+0.01, 0.0).all())
# print(torch.ge(tensor.abs()+0.01, 0.0).all())
```
returns
```
False
False
```
Why? [tensor.zip](https://github.com/user-attachments/files/18575204/tensor.zip)
### Versions
PyTorch version: 2.3.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.4 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.5.119 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090 GPU 1: NVIDIA GeForce RTX 4090 Nvidia driver version: 555.42.02 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: GenuineIntel Model name: 13th Gen Intel(R) Core(TM) i9-13900F CPU family: 6 Model: 183 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 1 Stepping: 1 CPU max MHz: 5600.0000 CPU min MHz: 800.0000 BogoMIPS: 3993.60 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes
xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 896 KiB (24 instances) L1i cache: 1.3 MiB (24 instances) L2 cache: 32 MiB (12 instances) L3 cache: 36 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-31 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Mitigation; Clear Register File Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] easy-torch==1.3.2 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.24.4 [pip3] nvidia-cublas-cu12==12.1.3.1 [pip3] nvidia-cuda-cupti-cu12==12.1.105 [pip3] nvidia-cuda-nvrtc-cu12==12.1.105 [pip3] nvidia-cuda-runtime-cu12==12.1.105 [pip3] nvidia-cudnn-cu12==8.9.2.26 [pip3] nvidia-cufft-cu12==11.0.2.54 [pip3] nvidia-curand-cu12==10.3.2.106 [pip3] nvidia-cusolver-cu12==11.4.5.107 [pip3] nvidia-cusparse-cu12==12.1.0.106 [pip3] nvidia-nccl-cu12==2.20.5 [pip3] 
nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.1.105 [pip3] onnx==1.15.0 [pip3] pytorch-forecasting==1.2.0 [pip3] pytorch-lightning==2.2.0 [pip3] torch==2.3.0 [pip3] torch_cluster==1.6.3+pt23cu121 [pip3] torch_geometric==2.4.0 [pip3] torch_scatter==2.1.2+pt23cu121 [pip3] torch_sparse==0.6.18+pt23cu121 [pip3] torch_spline_conv==1.2.2+pt23cu121 [pip3] torch-summary==1.4.5 [pip3] torchaudio==2.3.0 [pip3] torchinfo==1.8.0 [pip3] torchmetrics==1.3.0.post0 [pip3] torchsummary==1.5.1 [pip3] torchvision==0.18.0 [pip3] triton==2.3.0 [conda] blas 1.0 mkl [conda] cuda-cudart 12.1.105 0 nvidia [conda] cuda-cupti 12.1.105 0 nvidia [conda] cuda-libraries 12.1.0 0 nvidia [conda] cuda-nvrtc 12.1.105 0 nvidia [conda] cuda-nvtx 12.1.105 0 nvidia [conda] cuda-opencl 12.3.52 0 nvidia [conda] cuda-runtime 12.1.0 0 nvidia [conda] easy-torch 1.3.2 pypi_0 pypi [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] libcublas 12.1.0.26 0 nvidia [conda] libcufft 11.0.2.4 0 nvidia [conda] libcurand 10.3.4.52 0 nvidia [conda] libcusolver 11.4.4.55 0 nvidia [conda] libcusparse 12.0.2.55 0 nvidia [conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch [conda] libnvjitlink 12.1.105 0 nvidia [conda] mkl 2023.1.0 h213fc3f_46343 [conda] mkl-service 2.4.0 py311h5eee18b_1 [conda] mkl_fft 1.3.8 py311h5eee18b_0 [conda] mkl_random 1.2.4 py311hdb19cb5_0 [conda] numpy 1.24.4 pypi_0 pypi [conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi [conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi [conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi [conda] pytorch-cuda 12.1 ha16c6d3_5 
pytorch [conda] pytorch-forecasting 1.2.0 pypi_0 pypi [conda] pytorch-lightning 2.2.0 pypi_0 pypi [conda] pytorch-mutex 1.0 cuda pytorch [conda] torch 2.3.0 pypi_0 pypi [conda] torch-cluster 1.6.3+pt23cu121 pypi_0 pypi [conda] torch-geometric 2.4.0 pypi_0 pypi [conda] torch-scatter 2.1.2+pt23cu121 pypi_0 pypi [conda] torch-sparse 0.6.18+pt23cu121 pypi_0 pypi [conda] torch-spline-conv 1.2.2+pt23cu121 pypi_0 pypi [conda] torch-summary 1.4.5 pypi_0 pypi [conda] torchaudio 2.3.0 pypi_0 pypi [conda] torchinfo 1.8.0 pypi_0 pypi [conda] torchmetrics 1.3.0.post0 pypi_0 pypi [conda] torchsummary 1.5.1 pypi_0 pypi [conda] torchvision 0.18.0 pypi_0 pypi [conda] triton 2.3.0 pypi_0 pypi
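A likely explanation (an assumption, since the tensor's contents are only in the attached zip): the loaded array contains NaN values. Any comparison involving NaN evaluates to False, and both `clamp_min` and `abs` propagate NaN rather than remove it, so `.all()` returns False exactly as reported. A minimal sketch:

```python
import torch

# Hypothetical reproduction: a tensor containing NaN shows the reported
# behavior, because any comparison with NaN is False, and clamp_min/abs
# both propagate NaN instead of removing it.
t = torch.tensor([1.0, float("nan"), 2.0])

print(torch.ge(t.clamp_min(1e-6), 0.0).all())  # tensor(False)
print(torch.ge(t.abs() + 0.01, 0.0).all())     # tensor(False)

# Checking for NaN first makes the cause visible:
print(torch.isnan(t).any())                    # tensor(True)
```

If `torch.isnan(tensor).any()` is True for the loaded tensor, the result is expected IEEE-754 behavior, not a bug.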
true
2,815,877,292
s390x ci: ensure CI starts correctly if token pipe is not removed
AlekseiNikiforovIBM
closed
[ "triaged", "open source", "Merged", "topic: not user facing" ]
3
COLLABORATOR
Mark stop actions as "may fail". The container is expected to stop on its own in the normal case. Remove the "may fail" mark from token generation steps.
true
2,815,772,955
[ATen] Implement exception handling for hipsolver APIs
danzimm
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing" ]
5
CONTRIBUTOR
Summary: TSA Test Plan: CI Differential Revision: D68741194
true
2,815,706,878
[typing] Not all operators correctly infer Tensor result.
randolf-scholz
closed
[ "module: typing", "triaged", "better-engineering", "needs design" ]
2
CONTRIBUTOR
### 🐛 Describe the bug <details> <summary> Static Typing Test suite </summary> ```python # mypy: enable-error-code="unused-ignore" import operator import torch from torch import Tensor from typing_extensions import assert_type, reveal_type x = torch.randn(3) assert_type(x, Tensor) i64: int = 2 f64: float = 3.14 # op(Tensor, Tensor) assert_type(x + x, Tensor) assert_type(x - x, Tensor) assert_type(x * x, Tensor) assert_type(x / x, Tensor) assert_type(x % x, Tensor) assert_type(x // x, Tensor) # type: ignore[assert-type] assert_type(x**x, Tensor) # type: ignore[assert-type] # comparisons assert_type(x < x, Tensor) assert_type(x > x, Tensor) assert_type(x <= x, Tensor) assert_type(x >= x, Tensor) assert_type(x == x, Tensor) assert_type(x != x, Tensor) # op(Tensor, int) assert_type(x + i64, Tensor) assert_type(x - i64, Tensor) assert_type(x * i64, Tensor) assert_type(x / i64, Tensor) assert_type(x % i64, Tensor) assert_type(x // i64, Tensor) # type: ignore[assert-type] assert_type(x**i64, Tensor) # type: ignore[assert-type] assert_type(x < i64, Tensor) assert_type(x > i64, Tensor) assert_type(x <= i64, Tensor) assert_type(x >= i64, Tensor) assert_type(x == i64, Tensor) assert_type(x != i64, Tensor) # op(Tensor, float) assert_type(x + f64, Tensor) assert_type(x - f64, Tensor) assert_type(x * f64, Tensor) assert_type(x / f64, Tensor) assert_type(x % f64, Tensor) assert_type(x // f64, Tensor) # type: ignore[assert-type] assert_type(x**f64, Tensor) # type: ignore[assert-type] assert_type(x < f64, Tensor) assert_type(x > f64, Tensor) assert_type(x <= f64, Tensor) assert_type(x >= f64, Tensor) assert_type(x == f64, Tensor) assert_type(x != f64, Tensor) # op(int, Tensor) assert_type(i64 + x, Tensor) assert_type(i64 - x, Tensor) # type: ignore[assert-type] assert_type(i64 * x, Tensor) assert_type(i64 / x, Tensor) # type: ignore[assert-type] assert_type(i64 % x, Tensor) # type: ignore[assert-type] assert_type(i64 // x, Tensor) # type: ignore[assert-type] assert_type(i64**x, 
Tensor) # type: ignore[assert-type] assert_type(i64 < x, Tensor) assert_type(i64 > x, Tensor) assert_type(i64 <= x, Tensor) assert_type(i64 >= x, Tensor) assert_type(i64 == x, Tensor) # type: ignore[assert-type] assert_type(i64 != x, Tensor) # type: ignore[assert-type] # op(float, Tensor) assert_type(f64 + x, Tensor) assert_type(f64 - x, Tensor) # type: ignore[assert-type] assert_type(f64 * x, Tensor) assert_type(f64 / x, Tensor) # type: ignore[assert-type] assert_type(f64 % x, Tensor) # type: ignore[assert-type] assert_type(f64 // x, Tensor) # type: ignore[assert-type] assert_type(f64**x, Tensor) # type: ignore[assert-type] assert_type(f64 < x, Tensor) assert_type(f64 > x, Tensor) assert_type(f64 <= x, Tensor) assert_type(f64 >= x, Tensor) assert_type(f64 == x, Tensor) # type: ignore[assert-type] assert_type(f64 != x, Tensor) # type: ignore[assert-type] OPS = [ operator.add, # + operator.sub, # - operator.mul, # * operator.truediv, # / operator.mod, # % operator.floordiv, # // operator.pow, # ** operator.le, # < operator.gt, # > operator.lt, # <= operator.ge, # >= operator.eq, # == operator.ne, # != ] for rhs in [x, i64, f64]: for op in OPS: assert isinstance(op(x, rhs), Tensor) for lhs in [x, i64, f64]: for op in OPS: assert isinstance(op(lhs, x), Tensor) ``` </details> ## Results Arithmetic, should be fixable by typing `_handle_torch_function_and_wrap_type_error_to_not_implemented` decorator used in `torch/_tensor.py`. 
- [ ] `Tensor // Tensor` inferred as `Any`, not `Tensor` - [ ] `Tensor ** Tensor` inferred as `Any`, not `Tensor` - [ ] `Tensor // Number` inferred as `Any`, not `Tensor` - [ ] `Tensor ** Number` inferred as `Any`, not `Tensor` - [ ] `Number - Tensor` inferred as `Any`, not `Tensor` - [ ] `Number / Tensor` inferred as `Any`, not `Tensor` - [ ] `Number % Tensor` inferred as `Any`, not `Tensor` - [ ] `Number // Tensor` inferred as `Any`, not `Tensor` - [ ] `Number ** Tensor` inferred as `Any`, not `Tensor` Comparisons, possibly unfixable currently - [ ] `Number == Tensor` inferred as `bool`, not `Tensor` - [ ] `Number != Tensor` inferred as `bool`, not `Tensor` ### Versions Tested with both 2.5.1 and 2.7.0.dev20250128+cu124 nightly cc @ezyang @malfet @xuzhao9 @gramster
true
2,815,599,877
torch.nested.narrow() or torch.nested.to_padded_tensor() breaks backwards pass - invalid gradient
kkj15dk
open
[ "triaged", "module: nestedtensor" ]
4
NONE
### 🐛 Describe the bug I am converting a jagged nested tensor to a padded tensor, then adding Rotary positional embeddings, then converting it back to a jagged tensor. The forward pass is just fine, but the backwards pass breaks. It is probably because the offsets object changes throughout the forward pass, but I cannot see how to fix this. If there was a function to convert padded tensor -> jagged tensor, while preserving the original offsets object of the jagged tensor before padding, I think that would be a workaround. Some code to showcase the bug: ```python import torch import torch.nn as nn def padded_from_jagged(tensor, pad_value=0.0): offsets = tensor.offsets() padded = torch.nested.to_padded_tensor(tensor, padding=pad_value) return padded, offsets def jagged_from_padded(tensor, offsets, contiguous=True): seq_lens = offsets.diff() jagged = torch.nested.narrow(tensor, dim=1, start=0, length=seq_lens, layout=torch.jagged) if contiguous: jagged = jagged.contiguous() return jagged class test_model(nn.Module): def __init__(self, dim, max_len): super().__init__() self.pos_emb = nn.Parameter(torch.randn(1, max_len, dim), requires_grad=True) def forward(self, x): # x is a ragged tensor (batch_size=4, j, dim=64), c is a regular tensor (batch_size=4, dim=64) for i in range(10): x_padded, offsets = padded_from_jagged(x) x_padded = x_padded * self.pos_emb x = jagged_from_padded(x_padded, offsets, contiguous=True) return x device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') batch_size = 4 max_len = 4096 dim = 64 model = test_model(dim, max_len).to(device) batch = torch.nested.nested_tensor([torch.randn(max_len - i, dim) for i in range(batch_size)], device=device, layout=torch.jagged) # batch_size=4, j=jagged, dim=64 output = model(batch) loss = output.mean() loss.backward() ``` Error message: ``` Traceback (most recent call last): File "/home/kkj/axolotl/playground/padded_bug.py", line 43, in <module> loss.backward() File 
"/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward torch.autograd.backward( File "/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward _engine_run_backward( File "/home/kkj/axolotl/.venv/lib/python3.10/site-packages/torch/autograd/graph.py", line 814, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: Function CloneBackward0 returned an invalid gradient at index 0 - got [4, j21, 64] but expected shape compatible with [4, j20, 64] ``` ### Versions Collecting environment information... 
PyTorch version: 2.7.0.dev20250122+cu126 Is debug build: False CUDA used to build PyTorch: 12.6 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.35 Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4050 Laptop GPU Nvidia driver version: 561.19 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 39 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 16 On-line CPU(s) list: 0-15 Vendor ID: GenuineIntel Model name: 13th Gen Intel(R) Core(TM) i5-13500H CPU family: 6 Model: 186 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 Stepping: 2 BogoMIPS: 6374.39 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities Virtualization: VT-x Hypervisor vendor: Microsoft Virtualization type: full L1d cache: 384 KiB (8 instances) L1i cache: 256 KiB (8 instances) L2 cache: 10 MiB (8 instances) L3 
cache: 18 MiB (1 instance) Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Mitigation; Clear Register File Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==2.1.2 [pip3] nvidia-cublas-cu12==12.6.4.1 [pip3] nvidia-cuda-cupti-cu12==12.6.80 [pip3] nvidia-cuda-nvrtc-cu12==12.6.77 [pip3] nvidia-cuda-runtime-cu12==12.6.77 [pip3] nvidia-cudnn-cu12==9.5.1.17 [pip3] nvidia-cufft-cu12==11.3.0.4 [pip3] nvidia-curand-cu12==10.3.7.77 [pip3] nvidia-cusolver-cu12==11.7.1.2 [pip3] nvidia-cusparse-cu12==12.5.4.2 [pip3] nvidia-cusparselt-cu12==0.6.3 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.6.85 [pip3] nvidia-nvtx-cu12==12.6.77 [pip3] pytorch-triton==3.2.0+git0d4682f0 [pip3] torch==2.7.0.dev20250122+cu126 [pip3] torchaudio==2.6.0.dev20250122+cu126 [pip3] torchvision==0.22.0.dev20250122+cu126 [conda] Could not collect cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
true
2,815,574,245
DISABLED test_op_dtype_propagation_bitwise_xor_cuda_int64 (__main__.TestCaseCUDA)
pytorch-bot[bot]
closed
[ "module: rocm", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
2
NONE
Platforms: rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_op_dtype_propagation_bitwise_xor_cuda_int64&suite=TestCaseCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36265413881). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_op_dtype_propagation_bitwise_xor_cuda_int64` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper return test(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner return func(*args, **kwds) File "/var/lib/jenkins/pytorch/test/inductor/test_op_dtype_prop.py", line 81, in test_op_dtype_propagation self.assertEqual(out, out_c) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4042, in assertEqual raise error_metas.pop()[0].to_error( # type: ignore[index] AssertionError: Scalars are not equal! Expected 7 but got 7. 
Absolute difference: 0 Relative difference: 0.0 The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1620, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper raise e_tracked from e Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cuda:0", dtype=torch.int64], args=TensorList[Tensor[size=(), device="cuda:0", dtype=torch.int64]], kwargs={}, broadcasts_input=False, name='') To execute this test, run the following from the base repo dir: PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_op_dtype_prop.py TestCaseCUDA.test_op_dtype_propagation_bitwise_xor_cuda_int64 This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `inductor/test_op_dtype_prop.py` cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,815,512,350
s390x: disable test_model_exports_to_core_aten.py test
AlekseiNikiforovIBM
closed
[ "triaged", "open source", "Merged", "topic: not user facing", "ciflow/s390" ]
3
COLLABORATOR
It often gets killed by OOM. Disable it while investigating.
true
2,815,457,152
[inductor] Add features to docstring_linter (see #142496)
rec
closed
[ "module: lint", "open source", "better-engineering", "Merged", "ciflow/trunk", "topic: not user facing", "suppress-api-compatibility-check", "suppress-bc-linter" ]
11
COLLABORATOR
## Improvements to `docstring_linter` * Add a "grandfather list" of existing undocumented classes and functions (`--grandfather`, `--grandfather-tolerance`, `--no-grandfather`, `--write-grandfather`) * In classes, now just one of the class itself or its `__init__()` method needs to be documented (`--lint-init` turns the old behavior back on) * Now classes and functions defined local to other functions do not need to be documented (`--lint-local` turns the old behavior back on) * New `--report` flag produces a compact report of long, undocumented classes or function definitions: see attached example run over all pytorch: [pytorch-docs.json](https://github.com/user-attachments/files/18455981/pytorch-docs.json) ## Help text ``` $ python tools/linter/adapters/docstring_linter.py --help usage: docstring_linter.py [-h] [-l] [-v] [--grandfather GRANDFATHER] [--grandfather-tolerance GRANDFATHER_TOLERANCE] [--lint-init] [--lint-local] [--lint-protected] [--max-class MAX_CLASS] [--max-def MAX_DEF] [--min-docstring MIN_DOCSTRING] [--no-grandfather] [--report] [--write-grandfather] [files ...] 
`docstring_linter` reports on long functions, methods or classes without docstrings positional arguments: files A list of files or directories to lint optional arguments: -h, --help show this help message and exit -l, --lintrunner Run for lintrunner and print LintMessages which aren't edits -v, --verbose Print more debug info --grandfather GRANDFATHER, -g GRANDFATHER Set the grandfather list --grandfather-tolerance GRANDFATHER_TOLERANCE, -t GRANDFATHER_TOLERANCE Tolerance for grandfather sizes, in percent --lint-init, -i Lint __init__ and class separately --lint-local, -o Lint definitions inside other functions --lint-protected, -p Lint functions, methods and classes that start with _ --max-class MAX_CLASS, -c MAX_CLASS Maximum number of lines for an undocumented class --max-def MAX_DEF, -d MAX_DEF Maximum number of lines for an undocumented function --min-docstring MIN_DOCSTRING, -s MIN_DOCSTRING Minimum number of characters for a docstring --no-grandfather, -n Disable the grandfather list --report, -r Print a report on all classes and defs --write-grandfather, -w Rewrite the grandfather list ``` --- Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #148959 * #144622 * #144621 * __->__ #145834
true
2,815,247,667
Implement KL for studentT
moghadas76
closed
[ "triaged", "open source", "Stale" ]
3
NONE
Fixes #145729
true
2,815,213,433
[DO NOT MERGE] [TESTING] [ROCm] Triton cherry-picks for AMD backend perf optimisation
jataylo
closed
[ "module: rocm", "open source", "ciflow/trunk", "topic: not user facing", "ciflow/periodic", "ciflow/inductor", "ciflow/rocm", "ciflow/inductor-rocm" ]
7
COLLABORATOR
Testing for rc/3.2.x PR cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd
true
2,815,123,002
Update ET pin to 41e7ffa
GregoryComer
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "ciflow/inductor" ]
9
MEMBER
ExecuTorch pin is failing to update due to a change in the executorch install scripts. The previous install_requirements.sh now only installs dependencies and does not build ET. There is a new script - install_executorch.sh, which both installs dependencies and builds the framework. This PR updates the relevant CI logic to use install_executorch.sh and bumps the pin forward. This should fix the stuck ET pin.
true
2,815,109,705
DISABLED test_cat_max_autotune_triton (__main__.TestMaxAutotune)
pytorch-bot[bot]
closed
[ "module: rocm", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
8
NONE
Platforms: rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cat_max_autotune_triton&suite=TestMaxAutotune&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36261282178). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 14 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_cat_max_autotune_triton` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 825, in test_cat_max_autotune_triton self._test_cat_max_autotune_impl(using_triton_mm=True) File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 809, in _test_cat_max_autotune_impl FileCheck().check("call(").check_count(".run", 2, exactly=True).run(code[0]) RuntimeError: Expected to not find ".run" but found it # Topologically Sorted Source Nodes: [add], Original ATen: [aten.add] stream0 = get_raw_stream(0) triton_poi_fused_add_2.run(buf0, buf3, 1024, grid=grid(1024), stream=stream0) ~~~~ <--- HERE return (buf2, buf3, ) From CHECK-NOT: .run To execute this test, run the following from the base repo dir: PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_max_autotune.py TestMaxAutotune.test_cat_max_autotune_triton This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `inductor/test_max_autotune.py` cc 
@jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
true
2,815,054,139
Ensure GPU isolation for kubernetes pod MI300 runners.
saienduri
closed
[ "module: rocm", "open source", "Merged", "topic: not user facing", "rocm", "ciflow/unstable", "ciflow/rocm" ]
4
CONTRIBUTOR
Fixes the reason behind moving the tests to unstable initially (https://github.com/pytorch/pytorch/pull/145790). We ensure GPU isolation for each pod within Kubernetes by propagating the drivers selected for the pod from the Kubernetes layer up to the docker run in PyTorch. Now we stick with the GPUs assigned to the pod in the first place, and there is no overlap between the test runners. cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
true
2,814,806,838
[dynamo][builtin-skipfiles-cleanup] Remove posixpath
anijain2305
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #145559 * #145804 * __->__ #145828 * #145826 * #145753 * #145744 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
2,814,801,658
What changed in torch 2.5.0 that caused PhotoMaker to fail?
CaledoniaProject
open
[ "needs reproduction", "triaged" ]
3
NONE
### 🐛 Describe the bug Hi there, I came from this issue https://github.com/TencentARC/PhotoMaker/issues/205. I'm playing around with PhotoMaker, and I found that their IP adapter fails to work with the latest torch (2.5.x). But when I downgraded the PyTorch version to 2.4.1 it worked, so I suspect something changed fundamentally in version 2.5.0. How should I start investigating this issue? Thanks! ### Versions Latest
true