| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,822,370,384 | Enable ruff F841 on numpy tests | cyyever | closed | [
"module: tests",
"module: numpy",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @mruberry @ZainRizvi @rgommers | true |
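For context on the PR above, ruff's F841 rule reports a local variable that is assigned to but never used. The following is a hedged stdlib sketch of that idea, not ruff's implementation; the `unused_locals` helper is a name invented here for illustration:

```python
import ast

# Toy detector for the pattern F841 flags: names that are stored
# (assigned) somewhere in the source but never loaded (read).
def unused_locals(source: str) -> list[str]:
    tree = ast.parse(source)
    assigned, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
    return sorted(assigned - loaded)

print(unused_locals("def f():\n    x = 1\n    y = 2\n    return y\n"))  # -> ['x']
```

Unlike ruff, this toy version ignores scoping, augmented assignment, and deliberate `_` placeholders; it only illustrates what the rule looks for in test files.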
2,822,369,004 | DISABLED test_ord (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_ord&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36448427013).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_ord`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
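Step 3 of the instructions above amounts to something like the following shell sketch, where `sample.log` is a stand-in file created here, not a real CI log:

```shell
# Create a stand-in log file; in practice this would be the raw job log
# from the expanded Test step of the workflow linked above.
printf 'PASS test_foo\nFAIL test_ord\nRERUN test_ord\n' > sample.log
# Grep with line numbers to find each (re)run of the flaky test.
grep -n "test_ord" sample.log
```

Each match corresponds to one run of the test, since flaky tests are rerun in CI.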
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 13571, in test_ord
self.checkScript(fn, ("h"))
~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_ord
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,368,984 | DISABLED test_module_with_params_called_fails (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_module_with_params_called_fails&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36448427013).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_module_with_params_called_fails`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 12560, in test_module_with_params_called_fails
def test_module_with_params_called_fails(self):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 12561, in torch_dynamo_resume_in_test_module_with_params_called_fails_at_12561
with self.assertRaisesRegex(RuntimeError, "Cannot call a ScriptModule that is not a submodule of the caller"):
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 276, in __exit__
self._raiseFailure('"{}" does not match "{}"'.format(
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
expected_regex.pattern, str(exc_value)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 200, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: "Cannot call a ScriptModule that is not a submodule of the caller" does not match "RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
"
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_module_with_params_called_fails
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,368,934 | DISABLED test_script_star_assign (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_star_assign&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36449013532).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_star_assign`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9625, in test_script_star_assign
m = M2()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 321, in init_then_script
] = torch.jit._recursive.create_script_module(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self, make_stubs, share_types=not added_methods_in_init
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_recursive.py", line 555, in create_script_module
AttributeTypeIsSupportedChecker().check(nn_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_check.py", line 62, in check
source_lines = inspect.getsource(nn_module.__class__.__init__)
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_star_assign
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,368,898 | DISABLED test_script_bool_constant (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_bool_constant&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36448829087).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_bool_constant`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 6614, in test_script_bool_constant
self.checkScript(test_script_bool_constant, [])
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_bool_constant
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,822,365,968 | Adding the best autotuner config | Mingming-Ding | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Summary: Add logging of the best config selected among the autotune configs
Test Plan:
Testing in Mast: aps-omnifmv1-5_32_test_with_best_config-c5e9ceccf8
{F1974838864}
Reviewed By: oulgen
Differential Revision: D68931164
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
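The idea in the diff above — benchmark each candidate config and log the winner — can be sketched as follows. This is a hypothetical illustration, not Inductor's actual autotuner code; `pick_best` and `benchmark` are names invented here, and the timings are deterministic stand-ins rather than real kernel runs:

```python
# Hedged sketch of autotune-style selection: time every candidate
# config and log the best (fastest) one.  `benchmark` is assumed to
# return a runtime in seconds for a given config.
def pick_best(configs, benchmark):
    timings = {cfg: benchmark(cfg) for cfg in configs}
    best = min(timings, key=timings.get)
    print(f"best autotune config: {best} ({timings[best]:.3f}s)")
    return best

# Stand-in timings instead of launching real kernels.
fake_times = {"BLOCK=64": 0.021, "BLOCK=128": 0.013, "BLOCK=256": 0.017}
chosen = pick_best(fake_times, fake_times.get)  # -> "BLOCK=128"
```

Logging the chosen config at this point (as the diff does) makes it possible to audit autotuning decisions after a run.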
2,822,344,016 | Use std::string_view in tests | cyyever | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,822,333,182 | File based multi gpu | cbe135 | open | [
"oncall: distributed",
"triaged"
] | 4 | NONE | ### 🐛 Describe the bug
While trying to implement file-based distributed training with either a local file or a shared file, using

    dist.init_process_group(backend="nccl", init_method="file:///..." or "file://////...",
                            world_size=num_gpus, rank=world_rank)

I get `socketStartConnect: Connect to ... failed : Software caused connection abort`.
It seems file-based distributed training doesn't work with just the shared file, but also needs to connect to an IP socket?
The exact error messages I got are below. Thank you.
<img width="619" alt="Image" src="https://github.com/user-attachments/assets/8c31abe8-05c8-4d8d-b238-d526dd37558b" />
<img width="835" alt="Image" src="https://github.com/user-attachments/assets/852532a7-946c-45f5-a19e-37764cbbda3b" />
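The question raised here — why a file-based init still opens socket connections — comes down to the file being only a rendezvous mechanism: ranks exchange their socket addresses through the shared file, and the backend (NCCL in this report) still opens regular network connections for the collectives themselves. Below is a conceptual stdlib sketch of such a file-based address exchange; the `register` helper and the addresses are invented for illustration, this is not torch's implementation, and a real version would need file locking:

```python
import json
import os
import tempfile

# Hypothetical file-based rendezvous: each rank writes its socket
# address into a shared JSON file so peers can discover each other.
# The actual communication would still happen over those sockets.
def register(store_path: str, rank: int, addr: str) -> dict:
    entries = {}
    if os.path.exists(store_path):
        with open(store_path) as fh:
            entries = json.load(fh)
    entries[str(rank)] = addr  # advertise this rank's address
    with open(store_path, "w") as fh:
        json.dump(entries, fh)
    return entries

store = os.path.join(tempfile.mkdtemp(), "rendezvous.json")
register(store, 0, "10.0.0.1:29500")
peers = register(store, 1, "10.0.0.2:29500")
print(peers)  # -> {'0': '10.0.0.1:29500', '1': '10.0.0.2:29500'}
```

So even with `init_method="file://..."`, the machines still need working IP connectivity between them for the backend's collectives, which is consistent with the `socketStartConnect` failure above.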
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.39
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L40S
GPU 1: NVIDIA L40S
GPU 2: NVIDIA L40S
GPU 3: NVIDIA L40S
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7313P 16-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 89%
CPU max MHz: 3729.4919
CPU min MHz: 1500.0000
BogoMIPS: 5989.36
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] fft-conv-pytorch==1.2.0
[pip3] geotorch==0.3.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] pytorch-ignite==0.5.0.post2
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torcheval==0.0.7
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] fft-conv-pytorch 1.2.0 pypi_0 pypi
[conda] geotorch 0.3.0 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] pytorch-ignite 0.5.0.post2 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torcheval 0.0.7 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,822,333,001 | [NCCL] Segfault in topology path selection when calculating max bandwidth on nightly | cora-codes | open | [
"oncall: distributed",
"triaged",
"module: nccl"
] | 4 | NONE | ### 🐛 Describe the bug
This issue has come up repeatedly on nightly and I cannot figure out how to fix it.
```python
import torch
torch.distributed.init_process_group(backend="nccl", device_id=torch.device(0), rank=0, world_size=1)
torch.distributed.barrier()
```
Then I run this trivial reproduction:
```shell
MASTER_ADDR="127.0.0.1" MASTER_PORT="8800" gdb --args python repro.py
```
I saw it was segfaulting on `graph/topo.cc:785`:
```cpp
if (paths[i].bw > maxBw || (paths[i].bw == maxBw && paths[i].type < minType)) {
```
Listing the code we see:
```cpp
780 float maxBw = 0;
781 int count = 0;
782 NCCLCHECK(ncclCalloc(locals, system->nodes[resultType].count));
783 struct ncclTopoLinkList* paths = system->nodes[type].nodes[index].paths[resultType];
784 for (int i=0; i<system->nodes[resultType].count; i++) {
785 if (paths[i].bw > maxBw || (paths[i].bw == maxBw && paths[i].type < minType)) {
786 maxBw = paths[i].bw;
787 minType = paths[i].type;
788 if (pathType) *pathType = minType;
789 count = 0;
```
If we print out `paths`, we see it is nullptr.
```
(gdb) print paths@1
$4 = {0x0}
```
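The crash could be avoided defensively by guarding the path table before the selection loop. Below is a minimal Python sketch of that loop (a stand-in model for illustration, not NCCL's actual code): each entry mimics an `ncclTopoLinkList` as a `(bw, type)` tuple, and a missing table returns a sentinel instead of dereferencing null:

```python
def max_bw_index(paths):
    """Sketch of the graph/topo.cc selection loop, with the missing guard.

    `paths` stands in for the ncclTopoLinkList array; each entry is a
    (bw, type) tuple. Returns -1 when the table was never populated,
    instead of crashing like the nullptr dereference observed above.
    """
    if paths is None:            # the guard the C++ loop lacks
        return -1
    max_bw, min_type, best = 0.0, float("inf"), -1
    for i, (bw, typ) in enumerate(paths):
        # prefer higher bandwidth; break ties with the lower path type
        if bw > max_bw or (bw == max_bw and typ < min_type):
            max_bw, min_type, best = bw, typ, i
    return best

print(max_bw_index(None))                  # -1 instead of a segfault
print(max_bw_index([(1.0, 2), (2.0, 1)]))  # 1: highest bandwidth wins
```

The real fix presumably belongs in NCCL's topology detection itself, since a null `paths` table means the path graph was never computed for that node type.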
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitbf9d053
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 6.5 MiB (208 instances)
L1i cache: 6.5 MiB (208 instances)
L2 cache: 416 MiB (104 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-103
NUMA node1 CPU(s): 104-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==4.0.1
[pip3] numpy==1.21.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+gitb2684bf3
[pip3] torch==2.7.0a0+gitbf9d053
[pip3] torchaudio==2.6.0.dev20250130+cu126
[pip3] torchvision==0.22.0.dev20250130+cu126
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,822,307,068 | [dynamo] log recompile reason to dynamo_compile | xmfan | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146117
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,822,206,599 | [dynamo][builtin-skipfiles-cleanup] Remove inspect | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146339
* __->__ #146116
* #146322
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,822,204,826 | move and fix logic to update unbacked bindings | avikchaudhuri | open | [
"fb-exported",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export",
"ci-no-td"
] | 22 | CONTRIBUTOR | Summary:
Previously we were touching up unbacked bindings between Dynamo and AOTAutograd in strict export, but the logic had a bug: if an unbacked symint gets substituted by a backed symint, we would put the backed symint in the unbacked bindings (the check `is_symbol` was not enough here).
This PR fixes this logic, and moreover, moves it into the serializer instead, because we don't need this adjustment outside serde.
Test Plan: added test
D68880766
| true |
2,822,175,157 | [fsdp2] mixed precision missing `buffer_dtype` | leonardo0lyj | closed | [] | 3 | NONE | Hi Andrew @awgu 😊,
As a big fan of FSDP2, I found a feature that may be missing from its mixed precision, compared with your FSDP1:
Recall the mixed precision of FSDP1 has the [`buffer_dtype`](https://github.com/pytorch/pytorch/blob/e6704a2447a04349e6b021817a2bf2f601215e67/torch/distributed/fsdp/api.py#L124):
```python
@dataclass
class MixedPrecision:
"""
...
buffer_dtype (Optional[torch.dtype]): This specifies the dtype for
buffers. FSDP does not shard buffers. Rather, FSDP casts them to
``buffer_dtype`` in the first forward pass and keeps them in that
dtype thereafter. For model checkpointing, the buffers are saved
in full precision except for ``LOCAL_STATE_DICT``. (Default:
``None``)
"""
...
buffer_dtype: Optional[torch.dtype] = None
```
However, in FSDP2, we no longer have such [`buffer_dtype`](https://github.com/pytorch/pytorch/blob/e6704a2447a04349e6b021817a2bf2f601215e67/torch/distributed/fsdp/_fully_shard/_fsdp_api.py#L9):
```python
class MixedPrecisionPolicy:
param_dtype: Optional[torch.dtype] = None
reduce_dtype: Optional[torch.dtype] = None
output_dtype: Optional[torch.dtype] = None
cast_forward_inputs: bool = True
```
In FSDP2, buffers are treated as ignored tensors and are not sharded, so they stay in their initialized precision during forward and backward, which can violate the semantics of mixed precision. E.g.,
- During initialization, a model's parameters and buffers are in `float32`, and the model is then `fully_shard`ed with `MixedPrecisionPolicy(param_dtype=bfloat16, cast_forward_inputs=True)`
- During forward, we expect this model to run in `bfloat16` for every intermediate tensor (including those within a submodule)
- Indeed, parameters are unsharded in `param_dtype=bfloat16` and inputs to submodules are cast to `bfloat16`
- However, buffers stay in `float32` and join the forward compute with parameters and inputs (e.g., `intermediate = parameters * input + buffers`)
- Then, due to dtype promotion, the intermediate tensor (`intermediate`) will be in `float32`, as will all following intermediate tensors in this submodule (before the next submodule's `cast_forward_inputs`)
- So mixed precision ends up doing full-precision (`float32`) compute within each submodule
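The dtype promotion in the last two bullets can be demonstrated in isolation. Torch and NumPy follow the same promotion principle here; the sketch below uses NumPy's float16/float32 pair purely as a stand-in for the bf16 params/inputs and fp32 buffers described above:

```python
import numpy as np

low = np.ones(3, dtype=np.float16)    # plays the role of bf16 params/inputs
full = np.ones(3, dtype=np.float32)   # plays the role of fp32 buffers

prod = low * low                      # low * low stays in the low precision
out = prod + full                     # adding the fp32 buffer promotes to fp32

print(prod.dtype)  # float16
print(out.dtype)   # float32
```

From this point on, every downstream intermediate inherits the promoted dtype, which is exactly why the buffer's precision leaks into the whole submodule.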
How should we solve the `buffer` issue? Should we bring back FSDP1's `buffer_dtype`, or is there a more elegant design?
Looking forward 😄
| true |
2,822,175,032 | Fix logging and test files which misspell "precision" | danielvegamyhre | closed | [
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend"
] | 18 | CONTRIBUTOR | Noticed this while working on something, decided to submit a quick fix. | true |
2,822,167,453 | [export] Fix symfloat serialization | angelayi | closed | [
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
| true |
2,822,140,811 | [scan] Support lowering and lifted arguments for inductor | bohnstingl | open | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 1 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
Currently we support lifted arguments for `scan` only in dynamo and there is also no lowering support. This issue is a follow-up on https://github.com/pytorch/pytorch/pull/146110
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225
### Alternatives
_No response_
### Additional context
_No response_ | true |
2,822,140,618 | [scan] Corrections for scan | bohnstingl | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 9 | COLLABORATOR | This PR resolves some minor issues with the scan HOP and unifies the handling of the additional_inputs in the same way as for associative_scan.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ydwu4 | true |
2,822,122,750 | cpp_wrapper: fix inductor triton tests | benjaminglass1 | closed | [
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147225
* #146706
* #147403
* #146991
* #147215
* #146424
* __->__ #146109
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,822,111,187 | [associative_scan] Support lifted arguments for inductor | bohnstingl | open | [
"triaged",
"oncall: pt2"
] | 1 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
Currently we support lifted arguments for `associative_scan` only in dynamo, but not in inductor. This issue is a follow-up on https://github.com/pytorch/pytorch/pull/140043
cc @chauhang @penguinwu @ydwu4
### Alternatives
_No response_
### Additional context
_No response_ | true |
2,822,110,655 | [export] Include metadata in FlatArgsAdapter | angelayi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 7 | CONTRIBUTOR | Summary:
With https://github.com/pytorch/pytorch/pull/145956, which introduces
storing a list of namedtuple field names when serializing, we now want to
expose this list to the args adapter so that APS can utilize this information
and remove extraneous inputs.
Test Plan: No-op
Differential Revision: D68928416
| true |
2,822,094,265 | [export] Fix draft-export logging | angelayi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 7 | CONTRIBUTOR | Summary: Fix issue where the lazyTraceHandler does not exist
Test Plan: CI
Differential Revision: D68928070
| true |
2,822,088,039 | [CMake] Delete Caffe2 inspect_gpu binary | malfet | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | It is unbuildable right now, as the headers it depends on are gone
Fixes https://github.com/pytorch/pytorch/issues/146042
| true |
2,822,085,937 | [DO NOT MERGE] Testing C2 MI300 cluster. | saienduri | closed | [
"module: rocm",
"open source",
"Stale",
"topic: not user facing",
"ciflow/unstable"
] | 2 | CONTRIBUTOR | This PR is to test the stability of the C2 MI300x cluster.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,822,066,290 | add node mapping processing | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 14 | CONTRIBUTOR | Summary:
Add `node_mapping = create_node_mapping(pre_grad_graph_id, inductor_post_to_pre_grad_nodes, debug_info)`, to produce a `inductor_provenance_tracking_node_mappings.json` file. This file will be used by the provenance tracking highlighter tool to create provenance visualization.
`inductor_triton_kernel_to_post_grad_nodes.json` and `inductor_provenance_tracking_node_mappings.json` files are not dumped if they are both empty, so they were removed from some of the `test_structured_trace` tests.
Test Plan:
CI
```
buck run mode/dev-nosan fbcode//caffe2/test:fx -- -r graph_provenance
buck run mode/dev-nosan fbcode//caffe2/test/inductor:provenance_tracing
python test/dynamo/test_structured_trace.py
```
Differential Revision: D68190173
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,822,045,660 | Use OrderedSet in _functorch/partitioners | masnesral | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146102
In an attempt to make partitioning more deterministic, change all sets in partitioners.py to OrderedSets. Note that this change does not fix the non-determinism we're seeing in the internal model. But let's at least eliminate this potential source of non-determinism before investigating any changes to the mincut approach? | true |
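For readers unfamiliar with the idea: iteration order over a plain `set` is hash-dependent, while an insertion-ordered set makes downstream passes deterministic. A minimal sketch of such a structure (not PyTorch's actual `OrderedSet` implementation), built on dicts, which preserve insertion order:

```python
class OrderedSet:
    """Minimal insertion-ordered set sketch; Python dicts keep insertion order."""

    def __init__(self, items=()):
        self._d = dict.fromkeys(items)

    def add(self, item):
        self._d[item] = None

    def __contains__(self, item):
        return item in self._d

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)

s = OrderedSet(["relu", "add", "mul", "add"])  # duplicates collapse
s.add("cat")
print(list(s))  # ['relu', 'add', 'mul', 'cat'], stable insertion order
```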
2,822,038,276 | (WIP) Update NJT ops to check data for raggedness check | soulitzer | open | [
"release notes: nested tensor",
"module: dynamo",
"ciflow/inductor",
"no-stale"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146172
* __->__ #146101
* #145922
* #141842
* #141841
* #146052
Some Issues:
1. The way we use ephemeral sources doesn't work well with this case https://github.com/pytorch/pytorch/pull/145957#issuecomment-2632338330 so the backward function needs to somehow stash the intermediates somewhere
2. There are some duplicate runtime asserts created. E.g. if I do a binary op requiring a runtime assert during forward, assert(j0 == j1), then (1) I might end up doing that same j0 == j1 assert again later during forward, and (2) I will also almost always do the same j0 == j1 check when I compute gradients. I wonder how severe the slowdown is. If it is concerning, for compile, maybe we can dedupe them with some graph pass? For eager, could we have some kind of eager-only union find?
3. Previously, whether j0 == j1 succeeded was consistent with whether nt * nt2 succeeded. Now that we allow nt * nt2 to succeed, we are no longer consistent. One way forward is to just document this clearly.
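The "eager-only union find" idea in (2) could look roughly like the following sketch, where symbols are interned as strings, `check` stands in for the actual runtime assert, and a j0 == j1 check is skipped once the two symbols are known equal (all names here are hypothetical):

```python
class SymbolUnionFind:
    """Dedupe repeated equality asserts between unbacked symbols."""

    def __init__(self):
        self.parent = {}

    def find(self, s):
        self.parent.setdefault(s, s)
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]  # path halving
            s = self.parent[s]
        return s

    def assert_equal(self, a, b, check):
        """Run `check` only the first time these equivalence classes meet."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False           # already proven equal, skip the assert
        assert check(), f"{a} != {b}"
        self.parent[ra] = rb       # merge the equivalence classes
        return True

uf = SymbolUnionFind()
ran_first = uf.assert_equal("j0", "j1", lambda: True)
ran_second = uf.assert_equal("j0", "j1", lambda: True)
print(ran_first, ran_second)  # True False: the second check was deduped
```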
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,822,036,390 | [AOTI] Fix a memory leak in package boxed_run | desertfire | closed | [
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146100
Summary: AOTIModelPackageLoaderPybind::boxed_run missed a decref when constructing the returned py::list.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
2,822,022,839 | [CI][Distributed] Fix edge case: One rank case (Rank 0) should get [False, False] | nWEIdia | closed | [
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | COLLABORATOR | To match the expected tensor (i.e. 2nd element in the array). Making rank0 receive [False, False]
Fixes one of the issues reported in #146094
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @seemethere @malfet @eqy @ptrblck @tinglvv | true |
2,822,017,260 | Move get accelerator to use build time flags when possible | albanD | closed | [
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: python_frontend",
"topic: bug fixes",
"ciflow/mps",
"ciflow/xpu",
"ci-no-td"
] | 26 | COLLABORATOR | This PR does two main things (they are in a single PR to show how the newly added APIs are used).
- Add isBuilt and isAvailable APIs to the AcceleratorHook interface. See inline doc for their exact semantic
- Use the newly added isBuilt for accelerator check to ensure it does not poison fork
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @egienvalue we should do an MTIA patch for this and move to compile-time check once we figure out the CUDA+MTIA binary situation
cc @guangyey we would need to add these APIs to the HPU backend (which I don't have access to) and we can move it to be compile time as well to avoid initialization. | true |
2,822,000,718 | [ONNX] Bump onnx and onnxscript versions in CI | justinchuby | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14 | COLLABORATOR | Bump onnx onnxscript==0.1 in CI; Skipped onnxruntime 1.19 because it has regression on avgpool. | true |
2,821,985,374 | `torch.nn.functional.conv2d` 8 times slower in torch 2.5.1 compared to 2.3.1 | navidsam | open | [
"module: cudnn",
"triaged"
] | 7 | NONE | ### 🐛 Describe the bug
Hey everyone, I am noticing that in torch 2.5.1 the following snippet takes around 8 times longer than the exact same operation in torch 2.3.1, with everything else in my environment unchanged:
```python
import torch
import torch.nn.functional as F
import time
print(torch.__version__)
# prints the following
# 2.5.1+cu124
print("torch.backends.cudnn.deterministic = ", torch.backends.cudnn.deterministic)
print("torch.backends.cuda.matmul.allow_tf32 = ", torch.backends.cuda.matmul.allow_tf32)
print("torch.backends.cudnn.allow_tf32 = ", torch.backends.cudnn.allow_tf32)
print("torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = ", torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction)
# prints the following
# torch.backends.cudnn.deterministic = False
# torch.backends.cuda.matmul.allow_tf32 = False
# torch.backends.cudnn.allow_tf32 = True
# torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = True
torch.cuda.synchronize()
myconv_start = time.time()
with torch.no_grad():
out = F.conv2d(torch.rand(75, 1, 572, 572).cuda(), torch.rand(1, 1, 51, 51).cuda(), padding=0).cpu()
torch.cuda.synchronize()
myconv_time = time.time() - myconv_start
print(f"myconv_time = {myconv_time:0.2f} seconds")
out.shape
# prints the following
# myconv_time = 4.89 seconds
# torch.Size([75, 1, 522, 522])
```
My CUDA setup is as follows: driver version 550.90.07, CUDA 12.4, and I am using an L4 GPU.
The exact same code runs in 0.65 seconds on average when I run it with torch 2.3.1+cu121 in the exact same environment.
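One caveat when comparing single cold timings: the first call on a GPU can include cuDNN algorithm selection and kernel autotuning, so warm-up plus averaging usually gives a fairer comparison across versions. A pure-Python sketch of such a harness, with `work()` as a hypothetical stand-in for the conv2d call (a real GPU benchmark would also need `torch.cuda.synchronize()` around the timed region, as in the snippet above):

```python
import time

def work():
    # Stand-in for the F.conv2d(...) call being benchmarked.
    return sum(i * i for i in range(10_000))

def bench(fn, warmup=3, iters=10):
    for _ in range(warmup):          # discard cold-start runs (caching, autotuning)
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters  # mean seconds per call

avg = bench(work)
print(f"avg per call: {avg:.6f} s")
```

If the 8x gap survives warm-up and averaging, it points at a genuine algorithm-selection or cuDNN-version difference rather than one-time startup cost.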
### Versions
**`torch 2.5.1 environment`:**
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.066
BogoMIPS: 4400.13
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 4 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
**`torch 2.3.1` environment:**
```
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.31
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.066
BogoMIPS: 4400.13
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 4 MiB
L3 cache: 38.5 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.3.1
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
[conda] numpy 2.2.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.61 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-lightning 2.5.0.post0 pypi_0 pypi
[conda] torch 2.3.1 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.18.1 pypi_0 pypi
[conda] triton 2.3.1 pypi_0 pypi
```
cc @csarofeen @ptrblck @xwang233 @eqy | true |
2,821,982,619 | [ONNX] Migrate test_torch_export_with_onnxruntime.py to test_small_models_e2e.py | titaiwangms | closed | [
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 3 | COLLABORATOR | With [the deprecation of torch.onnx.dynamo_export](https://github.com/pytorch/pytorch/pull/146003), this PR turns the torch.export related tests toward torch.onn.export(..., dynamo=True), and places it in test_small_models_e2e.py
NOTE: test_exported_program_as_input_from_file and test_onnx_program_supports_retraced_graph are not kept, because they mostly test whether an exported program stays the same after save/load and retrace. However, in torch.onnx.export(..., dynamo=True), we focus more on the export from nn.Module to ONNX proto.
2,821,955,613 | [CI][Distributed] @skip_if_lt_x_gpu(2) seems to be broken | nWEIdia | open | [
"oncall: distributed",
"module: ci",
"module: tests"
] | 2 | COLLABORATOR | ### 🐛 Describe the bug
We have seen in several cases that the "@skip_if_lt_x_gpu(2)" does not automatically skip the unit test if/when the test is running on a single GPU.
This was one of the reasons PRs like https://github.com/pytorch/pytorch/pull/145195 were needed to make sure the tests got skipped on platforms with 1 GPU.
That is, skip_if_lt_x_gpu(2) seems to have some sort of dependency on world size: e.g., in the source code before https://github.com/pytorch/pytorch/pull/145195 was merged, the tests would run on one GPU even if world_size was set to 2.
As another example:
touch /tmp/barrier && TEMP_DIR=/tmp BACKEND='nccl' WORLD_SIZE=1 python test/distributed/test_distributed_spawn.py TestDistBackendWithSpawn.test_nccl_backend_bool_reduce
also runs and fails on a platform with 1 GPU, whereas test_nccl_backend_bool_reduce has this decorator here:
https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/distributed/distributed_test.py#L6425
So is the "skip_if_lt_x_gpu(2)" dependent on world_size or the number of GPUs?
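For comparison, here is a minimal, hypothetical sketch (not PyTorch's actual implementation) of what such a decorator is generally expected to do: gate purely on the visible device count, independent of WORLD_SIZE. The `device_count` parameter is injectable here only so the sketch runs without CUDA; the real decorator would presumably consult `torch.cuda.device_count()`.

```python
import functools
import unittest

def skip_if_lt_x_gpu(x, device_count=lambda: 0):
    """Hypothetical sketch: skip the test unless at least `x` devices are visible."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Gate on the visible device count only -- WORLD_SIZE plays no role here.
            if device_count() < x:
                raise unittest.SkipTest(f"requires at least {x} GPUs")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@skip_if_lt_x_gpu(2, device_count=lambda: 1)  # simulate a 1-GPU host
def test_needs_two_gpus():
    return "ran"

try:
    test_needs_two_gpus()
except unittest.SkipTest as e:
    print("skipped:", e)  # expected on a 1-GPU host regardless of WORLD_SIZE
```

If the real decorator behaved like this sketch, the test above would be skipped on a 1-GPU host no matter what WORLD_SIZE is set to — which is exactly the behavior that appears to be missing.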
### Versions
main TOT as of 01/30/2025
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @ZainRizvi @eqy @ptrblck @tinglvv | true |
2,821,955,375 | TEST3 | ZainRizvi | closed | [
"oncall: distributed",
"release notes: releng",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,821,954,225 | TEST2 | ZainRizvi | closed | [
"oncall: distributed",
"release notes: releng",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,821,953,105 | [WIP] TEST 1 | ZainRizvi | closed | [
"oncall: distributed",
"release notes: releng",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,821,953,053 | [ARM] Fix TestDataLoader.test_segfault unexpected success on Aarch64 | robert-hardwick | open | [
"triaged",
"open source",
"module: arm",
"Stale",
"ciflow/trunk",
"release notes: dataloader",
"arm priority"
] | 5 | COLLABORATOR | TestDataLoader.test_segfault gives an unexpected success on Linux Aarch64
cc @malfet @snadampal @milpuz01 @aditew01 @nikhil-arm @fadara01 | true |
2,821,944,040 | [draft_export] Clear pending unbacked symbols when overriding mismatched fake kernels | yiming0416 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Summary:
When encountering a mismatched fake kernel that also creates unbacked symbols, draft export fails with a `PendingUnbackedSymbolNotFound` error.
Clearing `shape_env.pending_fresh_unbacked_symbols` fixes this issue.
Test Plan:
```
buck2 run mode/dev-nosan caffe2/test:test_export -- -r test_override_mismatched_fake_kernel_with_unbacked_symbols
```
Differential Revision: D68920990
| true |
2,821,943,702 | Make the CUTLASS swizzle options configurable and default to 2. | masnesral | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146088
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov | true |
2,821,916,154 | Expose ToIValueAllowNumbersAsTensors to TORCH_PYTHON_API so we can use it in monarch | manav-a | closed | [
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 4 | CONTRIBUTOR | Summary: TSIA
Test Plan: Tested up the stack via existing unittests
Reviewed By: suo
Differential Revision: D68917233
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,821,894,044 | Enhancements to dim_order API for Ambiguity Detection | Gasoonjia | open | [
"module: docs",
"triaged",
"module: memory format",
"module: python frontend"
] | 3 | CONTRIBUTOR | In our continuous effort to enhance the usability and performance of PyTorch, we are revisiting the `dim_order API`. This API, which provides insights into the physical layout of dense tensors in memory, has been instrumental in helping users optimize performance-critical code.
Recently, we've introduced functionality to raise exceptions when ambiguous dim orders occur, ensuring clearer and more predictable behavior. This post aims to provide a comprehensive introduction to `dim_order`, highlighting its capabilities and the recent improvements, so users can fully leverage its potential in their projects.
### What is Dim Order?
In PyTorch, the dim order of a dense tensor represents the order in which its dimensions are laid out in memory. This is also known as the physical layout of the tensor. Dim order is represented as a tuple of integers, where each integer corresponds to a dimension in the tensor. Note that for tensors where memory format is defined (e.g. 4D tensor with dense memory layout) there is a one-to-one mapping between dim order and memory format. For example, `torch.contiguous_format` for a 4D tensor would be represented as `(0, 1, 2, 3)`, indicating that the dimensions are stored in the order of batch, channel, height, and width. On the other hand, `torch.channels_last` would be represented as `(0, 2, 3, 1)`, indicating that the channel dimension is stored last.
Compared to `torch.memory_format`, `dim_order` provides an explicit and detailed representation of each dimension's meaning and is directly accessible as part of PyTorch's IR for dense tensors.
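To make the mapping concrete, here is a small pure-Python sketch (independent of PyTorch's actual implementation) that derives a dim order from a dense tensor's strides by sorting dimensions from the largest stride (outermost in memory) to the smallest:

```python
def dim_order_from_strides(strides):
    """Return dimension indices sorted by stride, descending (outermost first)."""
    return tuple(sorted(range(len(strides)), key=lambda d: strides[d], reverse=True))

# Contiguous layout of a (N, C, H, W) = (2, 3, 5, 7) tensor:
# strides are (C*H*W, H*W, W, 1) = (105, 35, 7, 1).
print(dim_order_from_strides((105, 35, 7, 1)))  # (0, 1, 2, 3)

# channels_last layout of the same shape:
# strides are (H*W*C, 1, W*C, C) = (105, 1, 21, 3).
print(dim_order_from_strides((105, 1, 21, 3)))  # (0, 2, 3, 1)
```

Note that this sketch breaks down exactly when strides do not uniquely order the dimensions (e.g. size-1 dimensions), which is the ambiguity discussed below.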
### Ambiguous Dim Order
In some cases, depending on the shape, a tensor may have multiple legal dim orders to describe its memory layout. We refer to such cases as dim order ambiguity. These cases are common in real-world scenarios. For instance, consider a tensor with a 4D size and a batch size equal to 1:
```
t1 = torch.zeros(1, a, b, c) # a, b, and c are arbitrary integers
```
Any of `(0, 1, 2, 3)`, `(1, 0, 2, 3)`, `(1, 2, 0, 3)`, and `(1, 2, 3, 0)` would be valid dim orders for `t1`. Therefore, we say that `t1` has an ambiguous dim order.
### Problems with Dim Order Ambiguity
The dimension order of each tensor should remain constant as long as the model graph remains unchanged. This means that the arrangement of data in memory for intermediate tensors should not be affected by the shape of the input tensors used for tracing, ensuring correctness and performance across different input shapes.
Let's revisit the previous example. When the batch size equals 1, any of the four tuples would be a valid dim order. However, when increasing the batch size to 2:
```
t1 = torch.zeros(2, a, b, c) # a, b, and c are arbitrary integers
```
Now `t1` has only one valid dim order `(0, 1, 2, 3)`, which differs from the previous `t1`.
This can lead to unexpected behavior where the memory layout becomes sensitive to the tensor shape used when exporting with an example input. However, the memory layout is expected to be invariant to the shape of the tensor, meaning that changing the shape should not cause the memory layout to change. Therefore, we need to address the ambiguity issue.
Our proposed solution for this is to enhance the `torch.Tensor.dim_order()` API to allow users to detect ambiguity for a given shape.
### How to Avoid or Detect Ambiguity?
We introduced an extra argument called `ambiguity_check` to detect and resolve ambiguity in three ways:
- Set `ambiguity_check` to `False` (default) to return one of the legal dim orders without verifying its uniqueness.
- Set `ambiguity_check` to `True` to raise a `RuntimeError` if the dim order cannot be uniquely determined.
- Pass a list of `torch.memory_format` to check if the tensor conforms to one of the provided formats, raising a `RuntimeError` if no unique match is found.
To rule out ambiguity while keeping control over the allowed layouts, we recommend passing all expected memory formats to `ambiguity_check`. For example, if the current model only uses the `torch.contiguous_format` and `torch.channels_last` memory formats, we suggest using the following approach to obtain an unambiguous dim order:
```
allowed_formats = [torch.contiguous_format, torch.channels_last]
dim_order = tensor.dim_order(ambiguity_check=allowed_formats) # tensor here is an arbitrary pytorch tensor
```
This approach ensures that the obtained dim order is unambiguous within the current context.
### Example Usage
Here's an example of how to use the `dim_order` API:
```
import torch
# Create a tensor with shape (2, 3, 5, 7)
tensor = torch.empty((2, 3, 5, 7))
# A. Get the dim order of the tensor
dim_order = tensor.dim_order()
print(dim_order) # Output: (0, 1, 2, 3)
# B. Check if the dim order is unique; a tensor with a size-1 dimension
# (e.g. batch size 1) has an ambiguous dim order, so the check raises
ambiguous = torch.empty((1, 3, 5, 7))
try:
    ambiguous.dim_order(ambiguity_check=True)
except RuntimeError as e:
    print(e)  # Output: The tensor does not have unique dim order...
# C. Specify a list of allowed memory formats to resolve the ambiguity
allowed_formats = [torch.contiguous_format, torch.channels_last]
dim_order = tensor.dim_order(ambiguity_check=allowed_formats)
print(dim_order) # Output: (0, 1, 2, 3)
```
Here’s the Colab link you can try: [dim_order_example.ipynb](https://colab.research.google.com/drive/1NBLW7k5sgz6mT8SpaphVxAo6w1qkjJol?usp=sharing)
By using the `dim_order` API, you can easily determine the physical layout of your tensors in memory and ensure that your models are optimized for performance.
### Best Practices
To get the most out of the `dim_order` API, we recommend the following best practices:
- Forward all allowed/expected memory formats to the `ambiguity_check` argument to return expected dim order and avoid ambiguity.
- If the expected memory format of the model is unclear, specify the `ambiguity_check` argument as `True` to ensure that ambiguities are detected.
- Verify that the returned dim order matches your expectations before using it in your model.
### Conclusion
In this post, we introduced the `dim_order` API, a new tool for understanding tensor memory layout in PyTorch. We showed how to use the API to determine the physical layout of tensors in memory and how to resolve ambiguity in dim order, and we provided best practices for using the API effectively. With the `dim_order` API, you can gain a better understanding of the current memory layout and optimize your models for performance and accuracy, leading to better results in your deep learning projects.
Many thanks to @digantdesai, @larryliu0820, and @ezyang for their continued support and discussion!
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @jamesr66a @albanD | true |
2,821,879,527 | [MPS] Fix regression in non-contig bitwise ops | malfet | closed | [
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 3 | CONTRIBUTOR | Caused by https://github.com/pytorch/pytorch/pull/128393, which changed the semantics of `needsGather` and resulted in silent correctness errors on macOS 15+ if the output tensor is non-contiguous
Fixes https://github.com/pytorch/pytorch/issues/145203
| true |
2,821,877,593 | Libtorch CUDA 12.8 Test with --host-linker-script=use-lcs | tinglvv | closed | [
"triaged",
"open source",
"ciflow/binaries",
"topic: not user facing"
] | 7 | COLLABORATOR | https://github.com/pytorch/pytorch/issues/145570
Adding libtorch build to nightlies
Follow up for https://github.com/pytorch/pytorch/pull/145792
Testing @Skylion007 's suggestion in https://github.com/pytorch/pytorch/pull/145792#issuecomment-2625190049
cc @atalman @malfet @ptrblck @nWEIdia
| true |
2,821,875,744 | DISABLED test_device_type (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_device_type&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36431825889).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_device_type`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 3623, in test_device_type
self._test_device_type('cpu')
File "/var/lib/jenkins/workspace/test/test_jit.py", line 3620, in _test_device_type
self.checkScript(fn, [device])
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_device_type
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,821,875,665 | DISABLED test_module_parameters_and_buffers (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_module_parameters_and_buffers&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36432049153).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_module_parameters_and_buffers`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 13133, in test_module_parameters_and_buffers
class Strong(torch.jit.ScriptModule):
...<11 lines>...
return x + self.fc1(x) + self.fc1(x) + self.fc2(x)
File "/var/lib/jenkins/workspace/test/test_jit.py", line 13143, in Strong
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_module_parameters_and_buffers
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,821,875,587 | DISABLED test_script_non_tensor_args_outputs (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_non_tensor_args_outputs&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36432049153).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_non_tensor_args_outputs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 12773, in test_script_non_tensor_args_outputs
def test_script_non_tensor_args_outputs(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1209, in _script_impl
ast = get_jit_def(obj, obj.__name__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_non_tensor_args_outputs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,821,875,515 | DISABLED test_serialization_big_ints (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_serialization_big_ints&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36432049153).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_serialization_big_ints`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 15409, in test_serialization_big_ints
class M(torch.jit.ScriptModule):
...<14 lines>...
return x + (self.int32_max + self.int32_min) + (self.int64_max + self.int64_min)
File "/var/lib/jenkins/workspace/test/test_jit.py", line 15421, in M
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_serialization_big_ints
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,821,875,465 | DISABLED test_oneline_func (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_oneline_func&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36432049153).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_oneline_func`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 3117, in test_oneline_func
def test_oneline_func(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_oneline_func
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,821,875,372 | DISABLED test_ternary_static_if (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_ternary_static_if&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36432049153).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_ternary_static_if`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 6672, in test_ternary_static_if
script_model_1 = torch.jit.script(model1)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1150, in _script_impl
return torch.jit._recursive.create_script_module(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
obj, torch.jit._recursive.infer_methods_to_compile
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_recursive.py", line 555, in create_script_module
AttributeTypeIsSupportedChecker().check(nn_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_check.py", line 62, in check
source_lines = inspect.getsource(nn_module.__class__.__init__)
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_ternary_static_if
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,821,875,290 | DISABLED test_python_call_annotation (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_python_call_annotation&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36432049153).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_python_call_annotation`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 8087, in test_python_call_annotation
def test_python_call_annotation(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1209, in _script_impl
ast = get_jit_def(obj, obj.__name__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_python_call_annotation
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
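The underlying `RuntimeError` is generic Python behavior and can be reproduced outside Dynamo with a minimal sketch (illustrative only — not the actual Dynamo code; the `cache` dict stands in for `linecache.cache`):

```python
# Minimal sketch of "dictionary changed size during iteration": iterating a
# dict's items while another code path inserts a new key, the way linecache
# lazily populates its cache mid-iteration in the trace above.
cache = {"a.py": ("src",), "b.py": ("src",)}

it = iter(cache.items())      # like the genexpr over get_items_from_dict
next(it)                      # iteration has started
cache["c.py"] = ("src",)      # mutation mid-iteration (simulated lazy load)

err_msg = ""
try:
    next(it)
except RuntimeError as err:
    err_msg = str(err)
print(err_msg)  # dictionary changed size during iteration
```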
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,821,875,215 | DISABLED test_request_bailout (__main__.TestScript) | pytorch-bot[bot] | closed | [
"high priority",
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 6 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_request_bailout&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36432049153).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_request_bailout`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 3132, in test_request_bailout
jitted = torch.jit.script(fct_loop)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1209, in _script_impl
ast = get_jit_def(obj, obj.__name__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_request_bailout
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @amjames | true |
2,821,863,520 | [dynamo][functions] Improve getattr on functions | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146116
* #146219
* #146283
* __->__ #146075
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,821,844,611 | export + aot_export_module on models with non-parameter/buffer tensor state show up as getattrs in the graph without meta['val'] fields | bdhirsh | closed | [
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 4 | CONTRIBUTOR | quick repro:
```
import torch
from torch._functorch.aot_autograd import aot_export_module
class Model(torch.nn.Module):
def __init__(self, n, k, device):
super().__init__()
self.weight = torch.randn(n, k, device=device)
self.bias = torch.randn(n, device=device)
def forward(self, a):
return (torch.nn.functional.linear(a, self.weight, self.bias),)
m = Model(64, 64, 'cuda')
inp = torch.randn(64, 64, device='cuda')
gm1 = torch.export.export(m, (inp,)).module()
gm2, graph_signature = aot_export_module(gm1, (inp,), trace_joint=False)
print([n for n in gm2.graph.nodes if 'val' not in n.meta])
```
This prints:
```
[_tensor_constant0, _tensor_constant1, output]
```
It looks like:
(1) since the weight/bias are not marked as params or buffers, aot_export_module turns them into tensor constants in the graph, that are `getattr()'d` inside of the graph as necessary
(2) those getattr calls do not have `node.meta['val']` fields. According to @desertfire, AOTI would like to use this pair of API's together, and the lack of `meta['val']` field breaks some inductor invariants
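A hedged sketch of the symptom and one possible workaround (the tiny traced module below is an illustration, not the repro's graph, and backfilling from the module attribute is not necessarily the fix PyTorch will adopt):

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # plain tensor state: not a Parameter and not a registered buffer
        self.w = torch.randn(4, 4)

    def forward(self, x):
        return x + self.w

gm = torch.fx.symbolic_trace(M())
# Non-parameter tensor state shows up as get_attr nodes with no meta['val'];
# backfill each one from the owning module's real attribute so downstream
# passes see a value on every node.
for node in gm.graph.nodes:
    if node.op == "get_attr" and "val" not in node.meta:
        node.meta["val"] = getattr(gm, node.target)

missing = [n for n in gm.graph.nodes if n.op == "get_attr" and "val" not in n.meta]
print(missing)  # []
```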
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,821,826,117 | Nccl update to 2.25.1 for cuda 12.4-12.8 | atalman | closed | [
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ci-no-td",
"no-runner-experiments"
] | 19 | CONTRIBUTOR | Should resolve: https://github.com/pytorch/pytorch/issues/144768
We use one common NCCL version for CUDA builds 12.4-12.8: ``NCCL_VERSION=v2.25.1-1``
For CUDA 11.8 we use the legacy ``NCCL_VERSION=v2.21.1-1``
We use a pinned version of NCCL rather than the submodule.
Moves the NCCL location from ``third_party/nccl/nccl`` to ``third_party/nccl``
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @StrongerXi | true |
2,821,823,521 | add WaitCounter type interface and get rid of type errors | burak-turk | closed | [
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 11 | CONTRIBUTOR | Summary: as titled.
| true |
2,821,812,256 | Cap size of thread pool in select_algorithm to cpu count | masnesral | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146071
Summary: With changes from https://github.com/pytorch/pytorch/pull/144829, we can see more autotune configs and the size of the pool can get out of hand when using the cutlass backend.
See internal discussion at: https://fburl.com/workplace/7g4vz0zy
Test Plan: `python test/inductor/test_cutlass_backend.py`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,770,664 | [dynamo][enum] Trace through enum.py for enum construction | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146116
* #146219
* #146075
* __->__ #146070
* #146214
* #146258
* #146198
* #146062
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,821,762,164 | Set /NODEFAULTLIB:vcomp for MSVC when linking caffe2::mkl with libiomp5md.lib | taras-janea | closed | [
"module: build",
"module: windows",
"triaged",
"open source",
"ciflow/trunk",
"release notes: build",
"topic: bug fixes"
] | 5 | COLLABORATOR | Fixes:
- https://github.com/pytorch/pytorch/issues/113490
The PR sets the `/NODEFAULTLIB:vcomp` link flag when linking caffe2::mkl with libiomp5md.lib.
The changes have been verified by checking build output with `VERBOSE=1`, for example:
```
C:\PROGRA~1\MICROS~1\2022\COMMUN~1\VC\Tools\MSVC\1442~1.344\bin\Hostx64\x64\link.exe /nologo caffe2\CMakeFiles\torch_global_deps.dir\__\torch\csrc\empty.c.obj /out:bin\torch_global_deps.dll /implib:lib\torch_global_deps.lib /pdb:bin\torch_global_deps.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /debug /INCREMENTAL:NO /NODEFAULTLIB:vcomp -LIBPATH:\lib -LIBPATH:\lib\intel64 -LIBPATH:\lib\intel64_win -LIBPATH:\lib\win-x64 C:\lib\mkl_intel_lp64.lib C:\lib\mkl_intel_thread.lib C:\lib\mkl_core.lib C:\lib\libiomp5md.lib kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib /MANIFEST:EMBED,ID=2
```
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | true |
2,821,749,208 | Error handling for launcher method in CachingAutotuner | manojks1999 | closed | [
"triaged",
"open source",
"function request",
"topic: not user facing",
"module: inductor"
] | 12 | NONE | Fixes #146018
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,730,712 | Expand inductor codegen dtype asserts, fix scan | eellison | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146067
We were codegen-ing intermediate dtype asserts in some places but not all. This expands the assertions and fixes a newly failing assertion in
`TORCHINDUCTOR_COMPILE_THREADS=1 TORCH_LOGS="output_code" PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_logcumsumexp_cuda_float16` for scan.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,675,273 | Failed to export user-defined Triton kernel when using strict=False | desertfire | open | [
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 4 | CONTRIBUTOR | Repro: (works fine if you change `strict=False` to `strict=True`)
```
import torch
import triton
from triton import language as tl
@triton.jit
def add_kernel(
in_ptr0,
in_ptr1,
out_ptr,
n_elements,
BLOCK_SIZE: "tl.constexpr",
):
pid = tl.program_id(axis=0)
block_start = pid * BLOCK_SIZE
offsets = block_start + tl.arange(0, BLOCK_SIZE)
mask = offsets < n_elements
x = tl.load(in_ptr0 + offsets, mask=mask)
y = tl.load(in_ptr1 + offsets, mask=mask)
output = x + y
tl.store(out_ptr + offsets, output, mask=mask)
class Model(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
def forward(self, x):
out = torch.zeros_like(x[:, 4:])
# the slicing below creates two ReinterpretView
# instances: with offset=3 and offset=4
add_kernel[(10,)](
in_ptr0=x[:, 3:-1],
in_ptr1=x[:, 4:],
out_ptr=out,
n_elements=160,
BLOCK_SIZE=16,
)
return out
example_inputs = (
torch.randn(10, 20, device="cuda"),
)
ep = torch.export.export(Model(), example_inputs, strict=False)
```
Error:
```
Traceback (most recent call last):
File "/data/users/binbao/pytorch/test2.py", line 43, in <module>
ep = torch.export.export(Model(), example_inputs, strict=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/__init__.py", line 368, in export
return _export(
^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1044, in wrapper
raise e
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1017, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/exported_program.py", line 117, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 2079, in _export
return _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1044, in wrapper
raise e
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1017, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/exported_program.py", line 117, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1944, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1879, in _non_strict_export
aten_export_artifact = _to_aten_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1665, in _export_to_aten_ir_make_fx
gm, graph_signature = transform(_make_fx_helper)(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1809, in _aot_export_non_strict
gm, sig = aot_export(wrapped_mod, args, kwargs=kwargs, **flags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1585, in _make_fx_helper
gm = make_fx(
^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/experimental/proxy_tensor.py", line 2232, in wrapped
return make_fx_tracer.trace(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/experimental/proxy_tensor.py", line 2170, in trace
return self._trace_inner(f, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/experimental/proxy_tensor.py", line 2141, in _trace_inner
t = dispatch_trace(
^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/_dynamo/eval_frame.py", line 749, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/experimental/proxy_tensor.py", line 1174, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/experimental/proxy_tensor.py", line 1730, in trace
res = super().trace(root, concrete_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/_symbolic_trace.py", line 832, in trace
(self.create_arg(fn(*args)),),
^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/experimental/proxy_tensor.py", line 1229, in wrapped
out = f(*tensors) # type:ignore[call-arg]
^^^^^^^^^^^
File "<string>", line 1, in <lambda>
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1488, in wrapped_fn
return tuple(flat_fn(*args))
^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/_functorch/_aot_autograd/utils.py", line 184, in flat_fn
tree_out = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 879, in functional_call
out = mod(*args[params_len:], **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/_symbolic_trace.py", line 810, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/experimental/proxy_tensor.py", line 1800, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/_symbolic_trace.py", line 528, in call_module
ret_val = forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/_symbolic_trace.py", line 803, in forward
return _orig_module_call(mod, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/export/_trace.py", line 1793, in forward
tree_out = mod(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/_symbolic_trace.py", line 810, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/experimental/proxy_tensor.py", line 1800, in call_module
return Tracer.call_module(self, m, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/_symbolic_trace.py", line 528, in call_module
ret_val = forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/fx/_symbolic_trace.py", line 803, in forward
return _orig_module_call(mod, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/nn/modules/module.py", line 1749, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/torch/nn/modules/module.py", line 1760, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/users/binbao/pytorch/test2.py", line 31, in forward
add_kernel[(10,)](
File "/home/binbao/local/miniconda3/envs/pytorch-3.11/lib/python3.11/site-packages/triton/runtime/jit.py", line 330, in <lambda>
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/binbao/local/miniconda3/envs/pytorch-3.11/lib/python3.11/site-packages/triton/runtime/jit.py", line 653, in run
kernel.run(grid_0, grid_1, grid_2, stream, kernel.function, kernel.packed_metadata, launch_metadata,
File "/home/binbao/local/miniconda3/envs/pytorch-3.11/lib/python3.11/site-packages/triton/backends/nvidia/driver.py", line 444, in __call__
self.launch(*args, **kwargs)
ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,821,652,008 | Turn on fx graph cache and automatic dynamic pgo local caches in fbcode | oulgen | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146065
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,626,053 | [PT2] Support add/remove passes in pre_grad | huxintong | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 17 | CONTRIBUTOR | Summary:
Support the same functionality with acc_tracer disabled: add a new config for pre_grad add/remove passes; the front end still uses the same interface.
Some minor updates in the pre_grad passes make sure the passes run in the desired order; after the added passes, passes like remove_noops still run at the end.
Test Plan: add new UT, please see stacked diff for add pass tests (TODO: update diff link)
Differential Revision: D68909278
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,620,190 | [wip] torch._dynamo.disable on the CA graph | xmfan | closed | [
"Stale",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146063
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @yf225 | true |
2,821,604,738 | [dynamo] Support frozenset({..}).__contains__ | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146116
* #146075
* #146070
* #146214
* #146198
* __->__ #146062
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,821,584,107 | [inductor][triton] Fix average pool nd for int64 dtype | kundaMwiza | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 8 | CONTRIBUTOR | The eager mode implementation of average pool nd returns an integer tensor if the input is also an integer tensor. This should also be preserved in inductor.
Fixes the `pytest -k test_comprehensive_nn_functional_avg_pool2d_cpu_int64` error: `Triton compilation failed: triton_poi_fused_avg_pool2d_0`.
See WIP https://github.com/pytorch/pytorch/pull/145865#issuecomment-26200289890 to potentially enable such tests as they aren't enabled yet.
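The eager-mode dtype behavior being matched can be checked with a small sketch (assumes eager `avg_pool2d` accepts integer inputs on CPU, as described above):

```python
import torch
import torch.nn.functional as F

x = torch.randint(0, 10, (1, 1, 4, 4), dtype=torch.int64)
out = F.avg_pool2d(x, kernel_size=2)
# Eager mode keeps the integer dtype (integer-truncated averages); this PR
# makes the inductor/Triton path preserve the same dtype instead of failing
# to compile.
print(out.dtype)
```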
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,572,286 | Barebones flat_apply HOP | zou3519 | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146060
* #146059
This PR:
- adds pytree.register_constant for registering a class to be treated as
a constant by torch.compile/torch.fx
- adds a very barebones flat_apply HOP. This should be sufficient to get
mark_traceable working. A lot more work is necessary to get the custom
operator case working (when make_fx sees a custom operator with PyTree
arg types, it needs to emit a call to the flat_apply HOP).
- I expect the flat_apply HOP to change a lot, I want to ship this in
the current state to unblock the mark_traceable and custom ops
work.
Test Plan:
- It's kind of difficult to test the barebones flat_apply HOP "works" so
I added a really simple test.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,821,572,171 | Add torch.utils._pytree.register_dataclass | zou3519 | closed | [
"Merged",
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146060
* __->__ #146059
This is an API that registers a dataclass as a pytree node.
It directly calls torch.export.register_dataclass, but we should
eventually inline that implementation here. I want to use this API for
something in compile and feel weird calling
torch.export.register_dataclass.
Test Plan:
- tests | true |
2,821,557,769 | Turn on local caches for fbcode | oulgen | closed | [
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Differential Revision: [D68908317](https://our.internmc.facebook.com/intern/diff/D68908317/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,553,763 | Turn on local caches for fbcode | oulgen | closed | [
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Differential Revision: [D68908317](https://our.internmc.facebook.com/intern/diff/D68908317/)
In order to launch mega caching, we need local caches to be on.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,530,360 | RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. | walker-ai | open | [
"module: cuda",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
When I execute the simplest torch code, it throws a RuntimeError:
```python
import torch
def main():
tensor_size = (1, 2)
data = torch.rand(tensor_size, dtype=torch.float32, device='cuda')
if __name__ == "__main__":
main()
```
```bash
Traceback (most recent call last):
File "/home/orin/tools/cuSZp/python/example-torch.py", line 66, in <module>
main()
File "/home/orin/tools/cuSZp/python/example-torch.py", line 13, in main
data = torch.rand(tensor_size, dtype=torch.float32, device='cuda')
RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
Strangely, when I type this in the terminal, it only shows the error some of the time. For example, when I type `torch.tensor([1,2])`, it shows the error above, but when I type it again, it works.
My PyTorch is built from source; its version is 2.3.1.
### Versions
```bash
Collecting environment information...
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 19.0.0git (https://github.com/llvm/llvm-project.git 10dc3a8e916d73291269e5e2b82dd22681489aa1)
CMake version: version 3.31.4
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:18:56) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.216-tegra-aarch64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-7
Off-line CPU(s) list: 8-11
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
Vendor ID: ARM
Model: 1
Model name: ARMv8 Processor rev 1 (v8l)
Stepping: r0p1
CPU max MHz: 2201.6001
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 2 MiB
L3 cache: 4 MiB
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.14.0
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] torch 2.3.1 pypi_0 pypi
[conda] torchvision 0.18.1 pypi_0 pypi
```
cc @ptrblck @msaroufim @eqy | true |
2,821,510,290 | [Win][CD] Install cmake and setuptools from PyPI | malfet | open | [
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 6 | CONTRIBUTOR | And also avoid repeating the same command over and over
| true |
2,821,509,653 | Check meta strides for expanded dims in effn_attn_bias | eellison | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 26 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146054
With `_scaled_dot_product_efficient_attention.default`, we have lowering logic to realize the bias to specific alignment constraints. Some of the dims can be expanded, and we need to keep the stride of those dims at 0 to avoid materializing a larger tensor than we need. Previously, we checked the strides of the tensor, but if it is not realized, that will not work, so we should check the strides of the meta as well.
Note: getting the exact ordering of realizing/slicing/requiring_exact_strides was a little tricky. I commented to @exclamaforte on an example unable-to-fuse message you get if you do it incorrectly.
Fix for https://github.com/pytorch/pytorch/issues/145760
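For context on why the stride-0 check matters, here is a pure-Python sketch (illustrative only, not Inductor's actual logic) of how `Tensor.expand` assigns strides — an expanded size-1 dim gets stride 0, so no extra memory is materialized:

```python
def expand_strides(sizes, strides, new_sizes):
    """Mimic how Tensor.expand assigns strides: an expanded size-1 dim
    gets stride 0, so every index along it maps to the same memory.
    (Illustrative sketch, not Inductor's implementation.)"""
    out = []
    for size, stride, new_size in zip(sizes, strides, new_sizes):
        if size == 1 and new_size != 1:
            out.append(0)  # broadcasted dim: no new memory is materialized
        else:
            out.append(stride)
    return out

# a (1, 8) bias expanded to (4, 8): the batch dim becomes stride 0
print(expand_strides([1, 8], [8, 1], [4, 8]))  # -> [0, 1]
```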
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,503,018 | Add fqn_modifier at loading_state_dict and unit test | mori360 | closed | [
"oncall: distributed",
"topic: not user facing"
] | 1 | CONTRIBUTOR | In a fusion model, users might change the state_dict keys via a state_dict hook.
The load_state_dict APIs here won't call model.state_dict(), so the hooks won't be called to change the keys, causing a mismatch between the fqn and the state_dict keys.
This PR suggests that users declare how they change the state_dict key prefix (they can name it; here we call it "fqn_modifiers") by default.
During state_dict loading, we apply the prefix change while computing the fqn, so keys can be processed the same as if they had gone through the state_dict hook.
For example:
There's a state_dict_hook:
```
def _state_dict_hook(self, destination, prefix, keep_vars):
"""Remove "embedding" from the original embedding in the state_dict
name. This keeps the original state dict name for the embedding
from before fusing with the FusionEmbedding.
[!Note] This update changes the order of the OrderedDict
"""
key = prefix + "embedding.weight"
new_key = prefix + "weight"
destination[new_key] = destination[key]
del destination[key]
```
In DSD after this PR, we skip "embedding." before "weight" if we find the "fqn_modifiers" attribute on that module
```
def fqn_modifiers(self) -> Dict[str, str]:
return {
"weight": "embedding",
}
```
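As a minimal pure-Python sketch (hypothetical helper name, not the actual DCP implementation) of the fqn rewriting described above — dropping a path segment when it immediately precedes the key named in `fqn_modifiers`:

```python
def strip_modified_prefix(fqn: str, fqn_modifiers: dict) -> str:
    """Drop a path segment (e.g. "embedding") when it immediately
    precedes the key named in fqn_modifiers (e.g. "weight").
    Hypothetical sketch of the fqn rewriting described above."""
    parts = fqn.split(".")
    kept = []
    for i, part in enumerate(parts):
        nxt = parts[i + 1] if i + 1 < len(parts) else None
        if nxt is not None and fqn_modifiers.get(nxt) == part:
            continue  # skip "embedding" right before "weight"
        kept.append(part)
    return ".".join(kept)

print(strip_modified_prefix("fusion.embedding.weight", {"weight": "embedding"}))
# -> "fusion.weight", matching the key produced by the state_dict hook
```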
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @xmfan | true |
2,821,494,521 | Add manual override flag for core ATen op detection during bc check | soulitzer | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146101
* #145922
* #141842
* #141841
* __->__ #146052
Fixes https://github.com/pytorch/pytorch/issues/146049
Today the BC detection logic ignores the allow_list for core ATen ops (a PR landed 4 months ago to enable this). The problem is that if I have a PR that removes an op, the script can no longer check whether that op is a core ATen op (today we just error out).
With my fix: (1) we conservatively assume a core ATen op in such cases; (2) the user can specify in their ALLOW_LIST entry that their op is not a core ATen op.
Test plan:
- This is tested 2 PRs above
https://github.com/pytorch/pytorch/blob/016bdafdcbb22e1627e2018e53425d98c7eecd87/test/forward_backward_compatibility/check_forward_backward_compatibility.py#L129-L137
| true |
2,821,463,929 | inductor.config.descriptive_names = False is not actually supported (#145523) (#145523) | exclamaforte | closed | [
"fb-exported",
"Stale",
"ciflow/trunk",
"topic: docs",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6 | CONTRIBUTOR | Summary:
This config is not supported (it throws an error when set), and doesn't really make sense imo.
Approved by: https://github.com/eellison
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/edf266e9bbbf6063f7c4a336ffb50234e11a0a82
Differential Revision: D68846308
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,821,428,531 | [export] Add distributed test | angelayi | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Reland https://github.com/pytorch/pytorch/pull/145886 | true |
2,821,419,776 | Check for core ATen opset schema BC errors when operator has been removed | soulitzer | closed | [
"oncall: pt2",
"oncall: export",
"module: core aten"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
For core ATen operators, we will still check BC even if the op is on the allow_list. To determine whether an op is a core ATen op, we consult the tags:
```
_, _, tags = torch._C._get_operation_overload(schema.name, schema.overload_name)
```
The problem is that we run this on the build with the PR applied, so in the case where the PR removes the operator, it would return None (and error on trying to unpack a None).
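A sketch of the conservative guard this implies (hypothetical helper, not the exact script code, and with string tags standing in for the real tag objects): treat a `None` lookup result as a core ATen op instead of crashing on the unpack:

```python
def is_core_aten(lookup_result):
    # lookup_result stands in for what _get_operation_overload returns;
    # it is None when the PR under test removed the operator, so we
    # conservatively assume a core ATen op rather than crash on unpack.
    if lookup_result is None:
        return True
    _, _, tags = lookup_result
    return "core" in tags

print(is_core_aten(None))                    # -> True (op removed by the PR)
print(is_core_aten((None, None, ["core"])))  # -> True
print(is_core_aten((None, None, [])))        # -> False
```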
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @manuelcandales @SherlockNoMad | true |
2,821,371,356 | [binary builds] Anaconda. Remove dependency on conda environment for Windows nightly builds | atalman | closed | [
"oncall: releng",
"triaged",
"topic: binaries"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
Related to: https://github.com/pytorch/pytorch/issues/138506
Depends on: https://github.com/pytorch/pytorch/issues/145872
Windows Binary build is using conda:
https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/windows/condaenv.bat#L12
```
if "%%v" == "3.13" call conda create -n py!PYTHON_VERSION_STR! -y -c=conda-forge numpy=2.1.2 boto3 cmake ninja typing_extensions setuptools=72.1.0 python=%%v
...
```
While Windows Smoke tests are installing python from exe and using pip to install packages:
https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/windows/internal/smoke_test.bat#L39
```
set PYTHON_INSTALLER_URL=
if "%DESIRED_PYTHON%" == "3.13" set "PYTHON_INSTALLER_URL=https://www.python.org/ftp/python/3.13.0/python-3.13.0-amd64.exe"
....
if "%DESIRED_PYTHON%" == "3.13" %PYTHON_EXEC% -m pip install --pre numpy==2.1.2 protobuf
....
```
Refactor the code to use the same script for the binary build and the smoke test, installing Python from the exe rather than using conda.
### Versions
2.7.0 | true |
2,821,309,610 | [experimental] filter logs by subgraph | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 12 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146047
```
TORCH_LOGS="dynamo" TORCH_LOGS_TRACE_ID_FILTER="[1/0]" python r4.py
```
```
TORCH_LOGS="dynamo" TORCH_LOGS_TRACE_ID_FILTER="[0/0],[1/0_1]" python r4.py
``` | true |
2,821,270,280 | Dynamic Dims not getting produced right. | cptspacemanspiff | open | [
"oncall: pt2",
"export-triage-review",
"oncall: export"
] | 5 | NONE | ### 🐛 Describe the bug
So, I have been using torch.export, but ran into the annoying and confusing issue that dynamic dims do not, in all cases, produce dynamic dims.
Specifically, I encountered this when dealing with sliced assignments in Python; the size of the dynamic dimension in the example input, relative to its allowed range, makes a difference:
Example for dynamic dim [1 : 10]:
* If the example is size 1, it fails and errors out with an inferred constant.
* If the example is size 9, it works.
* If the example is size 10, it fails, **_does not error out_**, and the graph does not have any dynamic dimensions (presumably because the internally inferred constant was 10).
The Python reproduction below shows this:
```python
from torch.export import export, Dim
import torch
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
self.register_buffer("cache", torch.zeros(10))
def forward(self, x):
self.cache[0 : x.shape[0]] = x
return x
max_size = 10
min_size = 1
dim = Dim(name="dim", min=min_size, max=max_size)
module = Foo()
# This works and is correct:
# exported_not_max_size
# ExportedProgram:
# class GraphModule(torch.nn.Module):
# def forward(self, b_cache: "f32[10]", x: "f32[s0]"):
# #
# sym_size_int_2: "Sym(s0)" = torch.ops.aten.sym_size.int(x, 0)
# # File: /home/nlong/execu-tools/python/tests/export_dynamic_size_bug.py:12 in forward, code: self.cache[0 : x.shape[0]] = x
# slice_1: "f32[s0]" = torch.ops.aten.slice.Tensor(b_cache, 0, 0, sym_size_int_2); b_cache = sym_size_int_2 = None
# copy_: "f32[s0]" = torch.ops.aten.copy_.default(slice_1, x); slice_1 = copy_ = None
# return (x,)
# Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.BUFFER: 3>, arg=TensorArgument(name='b_cache'), target='cache', persistent=True), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='x'), target=None)])
# Range constraints: {s0: VR[1, 10]}
exported_not_max_size = export(
module,
(
torch.zeros(
max_size - 1,
),
),
dynamic_shapes={"x": {0: dim}},
)
# This works, but is incorrect:
# exported_max_size
# ExportedProgram:
# class GraphModule(torch.nn.Module):
# def forward(self, b_cache: "f32[10]", x: "f32[10]"):
# # File: /home/nlong/execu-tools/python/tests/export_dynamic_size_bug.py:12 in forward, code: self.cache[0 : x.shape[0]] = x
# copy_: "f32[10]" = torch.ops.aten.copy_.default(b_cache, x); b_cache = copy_ = None
# return (x,)
# Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.BUFFER: 3>, arg=TensorArgument(name='b_cache'), target='cache', persistent=True), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='x'), target=None)])
# Range constraints: {}
exported_max_size = export(
module,
(
torch.zeros(
max_size,
),
),
dynamic_shapes={"x": {0: dim}},
)
# This fails with the message:
# Not all values of dim = L['x'].size()[0] in the specified range dim <= 10 are valid because dim was inferred to be a constant (1).
# exported_min_size = export(
# module,
# (
# torch.zeros(
# min_size,
# ),
# ),
# dynamic_shapes={"x": {0: dim}},
# )
print("---------------------------------------")
print("exported_not_max_size")
print(exported_not_max_size)
print("---------------------------------------")
print("exported_max_size")
print(exported_max_size)
print("---------------------------------------")
# print("exported_min_size")
# print(exported_min_size)
```
The silent failure is the biggest issue; that being said, a better error message reporting the inferred constant and linking to a blog post on the topic would have saved me a lot of time.
Not sure if it is valid, but ideally there would be a bright red warning saying this may fail if your dimensions are 0/1/max. Though I realize max may be a special case for this sliced copy.
### Versions
torch==2.6.0.dev20250104+cpu
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | true |
2,821,077,346 | Redundant collectives are deduplicated but stick around as dead code | lw | open | [
"oncall: distributed",
"triaged",
"oncall: pt2",
"module: dtensor"
] | 6 | CONTRIBUTOR | ### 🐛 Describe the bug
We're implementing our own tensor-parallel mode with a "compile-first" approach. We'd thus like to be able to write "inefficient" code, such as each of `wq`, `wk` and `wv` issuing its own all-gather of the activations (instead of using `PrepareModuleInput` to do it ahead of time), and we'd like torch.compile to optimize the resulting graph for us.
What we're observing is that indeed the three redundant all-gathers get merged into a single one. Concretely, it seems that one of these all-gathers is chosen, and the _usages_ of the _other_ all-gathers are modified to ingest the output of the chosen all-gather. The issue is that **these other all-gathers are _not_ removed from the graph!** They stick around as dead code (no usages) and are moved to the end of the graph. This is inefficient, as they are still executed, thus wasting time.
It appears that this choice is deliberate (see #131023 and #132341), motivated by supporting non-SPMD scenarios, as detailed in #130918. In that case, only some ranks were using the output of a collective, and if the other ranks removed it that'd cause a deadlock.
Our application is perfectly SPMD hence this design choice is problematic for us. We'd appreciate a way to change the compiler's behavior if we can inform it of such an assumption.
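As a pure-Python illustration of "dedup without DCE" (a sketch, not Inductor's actual pass): rewriting the users of duplicate nodes to a canonical node leaves the duplicates in the graph with zero usages, exactly the dead code described above:

```python
def cse_without_dce(ops):
    """ops: list of (name, expr) pairs. Duplicates of the same expr are
    redirected to the first occurrence but NOT deleted (sketch)."""
    canonical = {}
    replacements = {}
    for name, expr in ops:
        if expr in canonical:
            replacements[name] = canonical[expr]  # users now read the canonical node
        else:
            canonical[expr] = name
    # duplicates remain in `ops` as dead code unless a DCE pass removes them
    dead = [name for name, _ in ops if name in replacements]
    return replacements, dead

ops = [("ag0", "all_gather(x)"), ("ag1", "all_gather(x)"), ("ag2", "all_gather(x)")]
print(cse_without_dce(ops))
# -> ({'ag1': 'ag0', 'ag2': 'ag0'}, ['ag1', 'ag2'])
```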
### Error logs
_No response_
### Versions
Nightly
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @tianyu-l @XilunWu | true |
2,821,041,642 | [AOTI] Support composed dynamic shape constraint | desertfire | closed | [
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146044
* #146043
Summary: Fixes https://github.com/pytorch/pytorch/issues/145500. When export takes a dynamic shape constraint as an expression containing a symbol, we should be able to solve the symbol at run time.
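As a sketch of what "solving the symbol at run time" means (illustrative only; AOTI handles this symbolically, not like this): given a dim constrained as, say, `2*s0 + 1`, the observed runtime size lets us recover `s0` by inverting the linear expression:

```python
def solve_symbol(coeff, const, runtime_size):
    """Solve coeff*s + const == runtime_size for the integer symbol s.
    (Illustrative sketch of runtime symbol solving, not AOTI's code.)"""
    s, rem = divmod(runtime_size - const, coeff)
    if rem != 0:
        raise ValueError(f"size {runtime_size} does not satisfy {coeff}*s + {const}")
    return s

print(solve_symbol(2, 1, 11))  # dim = 2*s0 + 1, runtime size 11 -> s0 = 5
```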
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov | true |
2,821,041,485 | [AOTI] Refactor codegen_input_symbol_assignment | desertfire | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146044
* __->__ #146043
Summary: Extract the common logic for size and stride symbol generation.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov | true |
2,820,900,220 | binaries/inspect_gpu.cc:23:10: fatal error: caffe2/core/common_gpu.h: No such file or directory | svenstaro | closed | [
"module: build",
"caffe2",
"triaged",
"actionable"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
[Some time ago](https://github.com/pytorch/pytorch/commit/a6bae1f6db3bb86c521dd3c2417f42b8f5e8d705) `caffe2/core/common_gpu.h` was deleted. However, this is [still referenced and used](https://github.com/pytorch/pytorch/blob/894ef8c1e3b4745ffe1c50b6cd019af5fe2e9489/binaries/inspect_gpu.cc#L23) which is compiled in [`binaries/CMakeLists.txt`](https://github.com/pytorch/pytorch/blob/894ef8c1e3b4745ffe1c50b6cd019af5fe2e9489/binaries/CMakeLists.txt#L27). I'm not sure why pytorch CI didn't catch this but I'm convinced it can't work currently. Not sure what the proper fix is.
This can easily be reproduced when building pytorch with `BUILD_BINARY=ON USE_CUDA=ON`. You will then run into this:
```
binaries/inspect_gpu.cc:23:10: fatal error: caffe2/core/common_gpu.h: No such file or directory
```
Since pytorch has discontinued caffe2, I believe the source files in `binaries/` need to be rewritten to not depend on it, and they should be compiled as part of the CI.
### Versions
git
cc @malfet @seemethere | true |
2,820,875,839 | S390x nightly builds timeouts | AlekseiNikiforovIBM | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"test-config/default",
"ciflow/s390"
] | 8 | COLLABORATOR | Sometimes the build times out at the end.
This should be fixed by an increased timeout. | true |
2,820,854,522 | Poor performance of any() on tensors even worse on cuda vs cpu | vince62s | open | [
"module: performance",
"module: cuda",
"triaged"
] | 5 | NONE | The issue is that `any()` on a tensor is way too slow compared to the list equivalent `any(list)`.
Here is a benchmark:
```python
import torch
import time
# Function to benchmark the 'any' operation on list and tensor
def benchmark_any_operation(size, true_elements_ratio):
# Create a list and tensor of the same size and with the same number of True elements
num_true_elements = int(size * true_elements_ratio)
# Create a list (Python list)
mylist = [True] * num_true_elements + [False] * (size - num_true_elements)
# Shuffle the list to mix True and False values
from random import shuffle
shuffle(mylist)
# Create a tensor (PyTorch tensor)
mytensor_cpu = torch.tensor(mylist)
mytensor_cuda = mytensor_cpu.cuda() if torch.cuda.is_available() else None
# Benchmark 'any' on the Python list
start_time = time.time()
list_result = any(mylist)
list_time = time.time() - start_time
# Benchmark 'any' on the PyTorch tensor (CPU)
start_time = time.time()
tensor_cpu_result = mytensor_cpu.any()
tensor_cpu_time = time.time() - start_time
# Benchmark 'any' on the PyTorch tensor (CUDA), only if CUDA is available
if mytensor_cuda is not None:
torch.cuda.synchronize()
start_time = time.time()
tensor_cuda_result = mytensor_cuda.any()
torch.cuda.synchronize()
tensor_cuda_time = time.time() - start_time
print(f"Tensor 'any()' on CUDA result: {tensor_cuda_result}, Time: {tensor_cuda_time:.6f} seconds")
else:
tensor_cuda_result = None
tensor_cuda_time = 0.0
# Print results
print(f"List 'any()' result: {list_result}, Time: {list_time:.6f} seconds")
print(f"Tensor 'any()' on CPU result: {tensor_cpu_result}, Time: {tensor_cpu_time:.6f} seconds")
# Example usage:
# Size of the list and tensor
size = 1000000 # 1 million elements
# Ratio of True elements
true_elements_ratio = 0.01 # 1% of elements are True
benchmark_any_operation(size, true_elements_ratio)
```
results are like:
```
Tensor 'any()' on CUDA result: True, Time: 0.016592 seconds
List 'any()' result: True, Time: 0.000005 seconds
Tensor 'any()' on CPU result: True, Time: 0.001372 seconds
```
Is there any way we can speed up `aten::any` (which for some reason also seems to use `aten::_local_scalar_dense`)?
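One caveat worth noting about the benchmark above (a sketch, not an explanation of the aten kernel): Python's built-in `any()` short-circuits at the first `True`, while `Tensor.any()` reduces over every element, so the two are not doing the same amount of work:

```python
def counting_any(values):
    """Like any(), but also report how many elements were inspected,
    to make the short-circuit behavior visible."""
    inspected = 0
    for v in values:
        inspected += 1
        if v:
            return True, inspected
    return False, inspected

data = [False] * 5 + [True] + [False] * 994
print(counting_any(data))  # -> (True, 6): only 6 of 1000 elements inspected
```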
cc @msaroufim @ptrblck @eqy | true |
2,820,764,232 | [AOTI] Remove AOTI_USE_CREATE_TENSOR_FROM_BLOB_V1 | desertfire | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Summary: The AOTI_USE_CREATE_TENSOR_FROM_BLOB_V1 macro was used to solve an FC issue, and it can be removed now.
Test Plan: CI
Differential Revision: D68871245
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov | true |
2,820,756,621 | DISABLED test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36396616696).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 11241, in test_config_option_dont_assume_alignment_cudagraphs
res = fn_c(inp)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 570, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 11218, in fn
def fn(x):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 749, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1199, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 312, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 100, in g
return f(*args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 578, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1848, in forward
fw_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 492, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 686, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 463, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1229, in run
return compiled_fn(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 393, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 423, in cudagraphify
return manager.add_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2251, in add_function
return fn, fn(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1945, in run
out = self._run(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2053, in _run
out = self.run_eager(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2217, in run_eager
return node.run(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 631, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, refs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1750, in check_memory_pool
if torch._C._cuda_checkPoolLiveAllocations(device, pool_id, unique_storages):
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,820,663,973 | fix incorrect literal strings / accidental tuples | haampie | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo"
] | 6 | CONTRIBUTOR | * `expr,` is short for `(expr,)`
* literal strings over multiple lines need to escape the newline `\` or use `(...)`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,820,646,678 | torch/autograd/graph.py" torch.func.{grad, vjp, jacrev, hessian} don't yet support saved tensor hooks | arelkeselbri | open | [
"module: autograd",
"triaged",
"module: functorch"
] | 0 | NONE | ### 🚀 The feature, motivation and pitch
I'm working on implementing a memory module featuring **test-time learning**, and I'm using functional calls (vmap) to isolate the weights' grads and **train them at inference time**.
However, I hit the following message.
torch/autograd/graph.py", line 415, in disable_saved_tensors_hooks [rank0]:[rank0]: torch._C._autograd._saved_tensors_hooks_disable(error_message) [rank0]:[rank0]: RuntimeError: torch.func.{grad, vjp, jacrev, hessian} **don't yet support saved tensor hooks**. Please open an issue with your use case.
Is this already solved in a newer version of torch?
Is there a workaround?
```
Exception has occurred: RuntimeError
torch.func.{grad, vjp, jacrev, hessian} don't yet support saved tensor hooks. Please open an issue with your use case.
File "./torchtitan/venv/lib/python3.11/site-packages/torch/autograd/graph.py", line 415, in disable_saved_tensors_hooks
torch._C._autograd._saved_tensors_hooks_disable(error_message)
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "./torchtitan/venv/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 47, in fn
with torch.autograd.graph.disable_saved_tensors_hooks(message):
File "./torchtitan/venv/lib/python3.11/site-packages/torch/_functorch/eager_transforms.py", line 1449, in grad_impl
results = grad_and_value_impl(func, argnums, has_aux, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./torchtitan/venv/lib/python3.11/site-packages/torch/_functorch/apis.py", line 399, in wrapper
return eager_transforms.grad_impl(func, argnums, has_aux, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./torchtitan/venv/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./torchtitan/venv/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
^^^^^^^^^^^
File "./torchtitan/venv/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
^^^^^^^^^^
File "./torchtitan/venv/lib/python3.11/site-packages/titans_pytorch/neural_memory.py", line 538, in store_memories
grads = self.per_sample_grad_fn(dict(weights_for_surprise), keys, adaptive_lr, values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./torchtitan/venv/lib/python3.11/site-packages/titans_pytorch/neural_memory.py", line 782, in forward
updates, next_store_state = self.store_memories(
^^^^^^^^^^^^^^^^^^^^
File "./torchtitan/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./torchtitan/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./torchtitan/torchtitan/models/llama/model.py", line 298, in forward
retrieved, mem_state = self.mem(x)
^^^^^^^^^^^
```
### Alternatives
_No response_
### Additional context
Memory module is coming from https://github.com/lucidrains/titans-pytorch
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @zou3519 @Chillee @samdow @kshitij12345 | true |
2,820,589,922 | ONNX export failing when using `symbolic` functions and scripting | AWilcke | open | [
"module: onnx",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
## Description
I am attempting to export an ONNX model from torch that uses `symbolic` functions and also contains some control flow. I believe this to be the same issue found in https://github.com/pytorch/pytorch/issues/113652. I have tried this on the newly released torch 2.6, with and without `dynamo`.
## Reproduction
Here is a minimal script to reproduce this, adapted from the above issue
```python
from argparse import ArgumentParser
from typing import Any
import torch
class SubModelImpl(torch.autograd.Function):
@staticmethod
def forward(ctx: Any, x: torch.Tensor) -> torch.Tensor:
return x
@staticmethod
def symbolic(g, x: torch.Tensor) -> torch.Tensor:
return g.op("custom::Identity", x).setType(x.type())
class SubModel(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x: torch.Tensor) -> torch.Tensor:
return SubModelImpl.apply(x)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.submodel_trace = torch.jit.trace(
SubModel(), torch.randn((5,), dtype=torch.float32)
)
def forward(self, x: torch.Tensor) -> torch.Tensor:
if torch.tensor(x.size(0) == 1, dtype=torch.bool):
return x + 1
return self.submodel_trace(x)
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--dynamo", action="store_true")
args = parser.parse_args()
model = torch.jit.script(Model())
x = torch.randn((1, 1), dtype=torch.float32)
if args.dynamo:
kwargs = {
"dynamo": True,
"dynamic_shapes": {"x": {0: torch.export.Dim("batch")}},
}
else:
kwargs = {"dynamo": False, "dynamic_axes": {"x": {0: "batch"}}}
torch.onnx.export(model, (x,), "test.onnx", input_names=["x"], **kwargs)
```
## Errors
### Dynamo
When running this with `--dynamo`, I get a `segmentation fault (core dumped)`.
```
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export(..., strict=False)`... ❌
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export`...
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with `torch.export.export`... ❌
[torch.onnx] Obtain model graph for `RecursiveScriptModule([...]` with Torch Script...
[1] 324045 segmentation fault (core dumped)
```
### TorchScript
When running without dynamo, I get the following error
```
Torch IR graph at exception: graph(%x.1 : Float(*, 1, strides=[1, 1], requires_grad=0, device=cpu)):
%16 : Long(device=cpu) = prim::Constant[value={1}](), scope: Model::
%17 : Long(device=cpu) = prim::Constant[value={0}](), scope: Model::
%18 : Long(device=cpu) = prim::Constant[value={11}](), scope: Model::
%5 : NoneType = prim::Constant(), scope: Model::
%19 : Bool(device=cpu) = prim::Constant[value={0}](), scope: Model::
%7 : Long(device=cpu) = aten::size(%x.1, %17), scope: Model:: # /path/to/script.py:33:24
%8 : Bool(device=cpu) = aten::eq(%7, %16), scope: Model:: # /path/to/script.py:33:24
%9 : Tensor = aten::tensor(%8, %18, %5, %19), scope: Model:: # /path/to/script.py:33:11
%11 : Tensor = prim::If(%9), scope: Model:: # /path/to/script.py:33:8
block0():
%12 : Tensor = aten::add(%x.1, %16, %16), scope: Model:: # /path/to/script.py:34:19
-> (%12)
block1():
%13 : Tensor = ^SubModelImpl[inplace=0, module="__main__"]()(%x.1), scope: Model::/SubModel::submodel_trace # /path/to/venv/.venv/lib/python3.11/site-packages/torch/autograd/function.py:575:0
block0(%x : Float(5, strides=[1], requires_grad=0, device=cpu)):
%15 : Float(5, strides=[1], requires_grad=0, device=cpu) = aten::view_as(%x, %x) # /path/to/venv/.venv/lib/python3.11/site-packages/torch/autograd/function.py:575:0
-> (%15)
-> (%13)
return (%11)
Traceback (most recent call last):
File "/path/to/script.py", line 55, in <module>
torch.onnx.export(model, (x,), "test.onnx", input_names=["x"], **kwargs)
File "/path/to/venv/.venv/lib/python3.11/site-packages/torch/onnx/__init__.py", line 383, in export
export(
File "/path/to/venv/.venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 495, in export
_export(
File "/path/to/venv/.venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1428, in _export
graph, params_dict, torch_out = _model_to_graph(
^^^^^^^^^^^^^^^^
File "/path/to/venv/.venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1057, in _model_to_graph
graph = _optimize_graph(
^^^^^^^^^^^^^^^^
File "/path/to/venv/.venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 634, in _optimize_graph
_C._jit_pass_lint(graph)
RuntimeError: 0 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/ir.cpp":579, please report a bug to PyTorch. 14 not in scope
```
If I remove the `symbolic` method on `SubModelImpl` then the TorchScript export works, but obviously I want to use my custom symbolic method.
I appreciate any guidance or workarounds you can suggest.
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.9 (main, Aug 14 2024, 05:07:28) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-1360P
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU max MHz: 5000.0000
CPU min MHz: 400.0000
BogoMIPS: 5222.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualisation: VT-x
L1d cache: 448 KiB (12 instances)
L1i cache: 640 KiB (12 instances)
L2 cache: 9 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.6.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.7.0.84
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.20.1
[pip3] onnxscript==0.1.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchmetrics==1.4.3
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect | true |
2,820,348,379 | This lib does not have new types of loss functions like focal Loss | anshulkr04 | open | [
"feature",
"module: nn",
"triaged"
] | 2 | NONE | PyTorch currently lacks built-in support for certain modern loss functions like Focal Loss, which is useful for handling class imbalance in tasks such as object detection. Adding native implementations of such loss functions would improve usability and reduce the need for external implementations. Are there any plans to include Focal Loss and similar loss functions in future releases?
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | true |
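For context on what is being requested: focal loss (Lin et al., "Focal Loss for Dense Object Detection") down-weights the loss contribution of well-classified examples so training focuses on hard, misclassified ones. A minimal scalar sketch of the binary form, FL(p) = -alpha * (1 - p)^gamma * log(p), is below; this is an illustrative standalone implementation, not an existing PyTorch API (the function name and defaults here are just the paper's common choices).

```python
import math

def focal_loss(p, gamma=2.0, alpha=0.25):
    """Binary focal loss for the true class.

    p is the predicted probability assigned to the true class.
    The (1 - p)**gamma modulating factor shrinks the loss for
    confident, correct predictions; gamma=0 recovers plain
    (alpha-weighted) cross-entropy.
    """
    return -alpha * (1.0 - p) ** gamma * math.log(p)

# A well-classified example contributes far less loss than a hard one:
easy = focal_loss(0.9)  # confident correct prediction
hard = focal_loss(0.1)  # badly misclassified prediction
```

In practice a tensor version over logits (e.g. torchvision's `sigmoid_focal_loss`) is what a built-in would look like, but the scalar form above captures the core down-weighting idea.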
2,820,347,291 | DISABLED test_number_augassign_bitwise_pow (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_number_augassign_bitwise_pow&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395402949).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_number_augassign_bitwise_pow`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 7092, in test_number_augassign_bitwise_pow
self.checkScript(func, (), optimize=True)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_number_augassign_bitwise_pow
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,347,083 | DISABLED test_pass (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pass&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395402949).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pass`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 13283, in test_pass
self.checkScript(foo, (True,))
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_pass
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,346,992 | DISABLED test_python_op_name (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_python_op_name&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395402949).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_python_op_name`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 15782, in test_python_op_name
def test_python_op_name(self):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 15785, in torch_dynamo_resume_in_test_python_op_name_at_15785
with self.assertRaisesRegex(RuntimeError, "randint"):
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 276, in __exit__
self._raiseFailure('"{}" does not match "{}"'.format(
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
expected_regex.pattern, str(exc_value)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 200, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: "randint" does not match "RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
"
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_python_op_name
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,346,879 | DISABLED test_module_copying (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_module_copying&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395402949).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_module_copying`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 13179, in test_module_copying
class Strong(torch.jit.ScriptModule):
...<6 lines>...
return self.weak(x)
File "/var/lib/jenkins/workspace/test/test_jit.py", line 13184, in Strong
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_module_copying
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,346,755 | DISABLED test_script_sequential_for (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_sequential_for&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395402949).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_sequential_for`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9379, in test_script_sequential_for
class Sub(torch.jit.ScriptModule):
...<6 lines>...
return self.weight + thing
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9384, in Sub
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_sequential_for
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,346,754 | DISABLED test_round (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_round&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395402949).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_round`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 13598, in test_round
self.checkScript(round_float, (1.5,))
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_round
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,346,101 | DISABLED test_nn_HingeEmbeddingLoss_no_batch_dim_sum (__main__.TestJitGeneratedModule) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nn_HingeEmbeddingLoss_no_batch_dim_sum&suite=TestJitGeneratedModule&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395322508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nn_HingeEmbeddingLoss_no_batch_dim_sum`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 16208, in do_test
check_against_reference(self, create_script_module, create_nn_module,
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_jit.py", line 93, in check_against_reference
outputs_test = self.runAndSaveRNG(func, nograd_inputs, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/common_jit.py", line 162, in runAndSaveRNG
results = func(*inputs, **kwargs)
File "/var/lib/jenkins/workspace/test/test_jit.py", line 16155, in create_script_module
module = make_module(script)
File "/var/lib/jenkins/workspace/test/test_jit.py", line 16149, in make_module
module = TheModule()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/_script.py", line 321, in init_then_script
] = torch.jit._recursive.create_script_module(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/_recursive.py", line 555, in create_script_module
AttributeTypeIsSupportedChecker().check(nn_module)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/_check.py", line 62, in check
source_lines = inspect.getsource(nn_module.__class__.__init__)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestJitGeneratedModule.test_nn_HingeEmbeddingLoss_no_batch_dim_sum
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |