| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,820,346,058 | DISABLED test_script_sequential_multi_output_fail (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_sequential_multi_output_fail&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395322508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_sequential_multi_output_fail`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9450, in test_script_sequential_multi_output_fail
class Sub(torch.jit.ScriptModule):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9456, in Sub
def forward(self, thing):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_sequential_multi_output_fail
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,345,890 | DISABLED test_torch_any (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_torch_any&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395322508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_torch_any`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9310, in test_torch_any
self.checkScript(fn, (torch.randn(3, 4), ))
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_torch_any
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,345,792 | DISABLED test_fibb (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_fibb&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395322508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_fibb`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 5931, in test_fibb
self.checkScript(func, inputs, optimize=True)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_fibb
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,345,681 | DISABLED test_python_frontend (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_python_frontend&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395322508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_python_frontend`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 5864, in test_python_frontend
ast = torch.jit.frontend.get_jit_def(fn, fn.__name__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_python_frontend
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,820,345,586 | DISABLED test_onnx_export_huggingface_llm_models_with_kv_cache (__main__.DynamoExporterTest) | pytorch-bot[bot] | closed | [
"module: onnx",
"triaged",
"module: flaky-tests",
"skipped"
] | 1 | NONE | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_onnx_export_huggingface_llm_models_with_kv_cache&suite=DynamoExporterTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36395267034).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 5 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_onnx_export_huggingface_llm_models_with_kv_cache`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/urllib3/connectionpool.py", line 468, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/urllib3/connectionpool.py", line 463, in _make_request
httplib_response = conn.getresponse()
File "/opt/conda/envs/py_3.9/lib/python3.9/http/client.py", line 1377, in getresponse
response.begin()
File "/opt/conda/envs/py_3.9/lib/python3.9/http/client.py", line 320, in begin
version, status, reason = self._read_status()
File "/opt/conda/envs/py_3.9/lib/python3.9/http/client.py", line 281, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/opt/conda/envs/py_3.9/lib/python3.9/socket.py", line 716, in readinto
return self._sock.recv_into(b)
File "/opt/conda/envs/py_3.9/lib/python3.9/ssl.py", line 1275, in recv_into
return self.read(nbytes, buffer)
File "/opt/conda/envs/py_3.9/lib/python3.9/ssl.py", line 1133, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/urllib3/connectionpool.py", line 802, in urlopen
retries = retries.increment(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/urllib3/util/retry.py", line 552, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/urllib3/packages/six.py", line 770, in reraise
raise value
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/urllib3/connectionpool.py", line 716, in urlopen
httplib_response = self._make_request(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/urllib3/connectionpool.py", line 470, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/urllib3/connectionpool.py", line 358, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/onnx/exporter/test_hf_models_e2e.py", line 18, in test_onnx_export_huggingface_llm_models_with_kv_cache
_prepare_llm_model_gptj_to_test()
File "/var/lib/jenkins/workspace/test/onnx/exporter/test_hf_models_e2e.py", line 42, in _prepare_llm_model_gptj_to_test
model = transformers.GPTJForCausalLM.from_pretrained(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2942, in from_pretrained
config, model_kwargs = cls.config_class.from_pretrained(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/configuration_utils.py", line 615, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/configuration_utils.py", line 644, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/configuration_utils.py", line 699, in _get_config_dict
resolved_config_file = cached_file(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/hub.py", line 389, in cached_file
resolved_file = hf_hub_download(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 860, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1009, in _hf_hub_download_to_cache_dir
_download_to_tmp_and_move(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1543, in _download_to_tmp_and_move
http_get(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 369, in http_get
r = _request_wrapper(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 301, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/huggingface_hub/utils/_http.py", line 93, in send
return super().send(request, *args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/requests/adapters.py", line 713, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: ("HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\n\nTo execute this test, run the following from the base repo dir:\n python test/onnx/exporter/test_hf_models_e2e.py DynamoExporterTest.test_onnx_export_huggingface_llm_models_with_kv_cache\n\nThis message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0", '(Request ID: 8457b041-589d-45b7-90a1-ae255fb5ab4e)')
```
</details>
Test file path: `onnx/exporter/test_hf_models_e2e.py`
cc @clee2000 @wdvr | true |
2,820,081,071 | multi node Error when dist.destroy_process_group | YufangMo | open | [
"oncall: distributed",
"triaged",
"module: nccl"
] | 3 | NONE | ### 🐛 Describe the bug
I hit the same problem as this one: https://discuss.pytorch.org/t/multi-node-error-on-process-destruction-cuda-error-invalid-device-ordinal/211990
I am testing my operator with P2P communication. When I test on a single node (8 devices), everything is fine. However, with multiple nodes, the error occurs: all of the communication completes fine, but the process group cannot be destroyed, and node 0 keeps waiting until it times out.
```python
import os

import torch
import torch.distributed as dist

local_rank = int(os.getenv("LOCAL_RANK"))
rank = int(os.getenv("RANK"))
dist.init_process_group(backend="nccl")
torch.cuda.set_device(local_rank)

ws = dist.get_world_size()
members = list(range(ws))
group = dist.new_group(ranks=members)

# Pass a tensor along the ring: each rank receives from rank - 1
# and forwards it to rank + 1.
x = torch.full((2, 3), rank, device=local_rank)
if rank != 0:
    dist.recv(x, src=rank - 1, group=group)
if rank != ws - 1:
    dist.send(x, dst=rank + 1, group=group)

torch.cuda.synchronize()
dist.barrier(group=group)

# Destroy the subgroup first, then the default group.
if rank in members:
    dist.destroy_process_group(group)
dist.destroy_process_group()
```
### Versions
torch 2.4.1 and 2.5.1 both have the problem. However, with the nightly (2.7.0) everything seems to be fine. I wonder: (1) what changed in `destroy_process_group` between 2.5.1 and 2.7.0, and (2) how can I fix this on 2.5.1?
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,820,025,892 | Request to add backward pass support for torch.special.gammainc | Holipori | closed | [] | 0 | NONE | ### 🚀 The feature, motivation and pitch
Hi,
I noticed that `torch.special.gammainc` doesn't currently support the backward pass. It would be a big help for my project if this feature could be added. Would you mind looking into it?
Thanks a lot!
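For what it's worth, the gradient with respect to `x` has a closed form, d/dx P(a, x) = x^(a-1) e^(-x) / Γ(a), which is what a backward implementation could use (the gradient with respect to `a` has no elementary form and is the harder part). A stdlib-only sketch checking that formula numerically — `reg_lower_gamma` and its series truncation are my own illustration here, not PyTorch code:

```python
import math

def reg_lower_gamma(a, x, terms=200):
    # Regularized lower incomplete gamma P(a, x) via the power series
    # P(a, x) = x^a e^{-x} / Gamma(a) * sum_{n>=0} x^n / (a (a+1) ... (a+n)).
    s, c = 0.0, 1.0 / a
    for n in range(terms):
        s += c
        c *= x / (a + n + 1)
    return math.exp(a * math.log(x) - x - math.lgamma(a)) * s

def dPdx(a, x):
    # Closed-form derivative w.r.t. x: x^(a-1) e^{-x} / Gamma(a).
    return math.exp((a - 1) * math.log(x) - x - math.lgamma(a))

a, x, h = 2.5, 1.3, 1e-6
fd = (reg_lower_gamma(a, x + h) - reg_lower_gamma(a, x - h)) / (2 * h)
# fd and dPdx(a, x) should agree to finite-difference accuracy.
```

Until native support lands, a `torch.autograd.Function` wrapping this derivative (with `a` marked non-differentiable) could serve as a workaround.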
### Alternatives
_No response_
### Additional context
_No response_ | true |
2,819,880,023 | Use Magma-cuda 12.8 for libtorch | tinglvv | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | COLLABORATOR | https://github.com/pytorch/pytorch/issues/145570
Build failure for libtorch wheel
```
CUDAContext.cpp:(.text+0x157): additional relocation overflows omitted from the output
/usr/bin/ld: failed to convert GOTPCREL relocation; relink with --no-relax
collect2: error: ld returned 1 exit status
```
Unsure if this is related; fixing it as a starting point. | true |
2,819,839,208 | Improve error message for wrong number of arguments in CachingAutotuner | exclamaforte | open | [
"good first issue",
"triaged",
"better-engineering",
"actionable",
"ciflow/trunk",
"oncall: pt2",
"module: inductor",
"module: compile ux"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
In triton_heuristics.py, the launcher call looks like:
```python
launcher(
*args_with_constexprs,
**cloned_kwargs,
grid=grid,
stream=stream,
)
```
If the kernel takes fewer arguments than were passed in, the error looks like this, because the splatted `args_with_constexprs` overflows into `grid`:
```
Traceback (most recent call last):
File "/home/gabeferns/org/debug/cat-125075/new-cat-code.py", line 302, in <module>
compiled_module_main('None', functools.partial(benchmark_compiled_module2, arg0_1))
File "/home/gabeferns/pt-envs/cat/torch/_inductor/wrapper_benchmark.py", line 402, in compiled_module_main
wall_time_ms = benchmark_compiled_module_fn(times=times, repeat=repeat) * 1000
File "/home/gabeferns/org/debug/cat-125075/new-cat-code.py", line 290, in benchmark_compiled_module2
return print_performance(fn, times=times, repeat=repeat)
File "/home/gabeferns/pt-envs/cat/torch/_inductor/utils.py", line 422, in print_performance
timings = torch.tensor([timed(fn, args, times, device) for _ in range(repeat)])
File "/home/gabeferns/pt-envs/cat/torch/_inductor/utils.py", line 422, in <listcomp>
timings = torch.tensor([timed(fn, args, times, device) for _ in range(repeat)])
File "/home/gabeferns/pt-envs/cat/torch/_inductor/utils.py", line 411, in timed
result = model(*example_inputs)
File "/home/gabeferns/org/debug/cat-125075/new-cat-code.py", line 289, in <lambda>
fn = lambda: call2([arg0_1])
File "/home/gabeferns/org/debug/cat-125075/new-cat-code.py", line 278, in call2
combined_kernel.run(arg0_1, buf0, buf1, 23445504, grid=grid(23445504), stream=stream0)
File "/home/gabeferns/pt-envs/cat/torch/_inductor/runtime/triton_heuristics.py", line 860, in run
self.autotune_to_one_config(*args, grid=grid, **kwargs)
File "/home/gabeferns/pt-envs/cat/torch/_inductor/runtime/triton_heuristics.py", line 737, in autotune_to_one_config
timings = self.benchmark_all_configs(*args, **kwargs)
File "/home/gabeferns/pt-envs/cat/torch/_inductor/runtime/triton_heuristics.py", line 711, in benchmark_all_configs
timings = {
File "/home/gabeferns/pt-envs/cat/torch/_inductor/runtime/triton_heuristics.py", line 712, in <dictcomp>
launcher: self.bench(launcher, *args, **kwargs)
File "/home/gabeferns/pt-envs/cat/torch/_inductor/runtime/triton_heuristics.py", line 594, in bench
return benchmarker.benchmark_gpu(kernel_call, rep=40)
File "/home/gabeferns/pt-envs/cat/torch/_inductor/runtime/benchmarking.py", line 39, in wrapper
return fn(self, *args, **kwargs)
File "/home/gabeferns/pt-envs/cat/torch/_inductor/runtime/benchmarking.py", line 243, in benchmark_gpu
_callable()
File "/home/gabeferns/pt-envs/cat/torch/_inductor/runtime/triton_heuristics.py", line 578, in kernel_call
launcher(
TypeError: launcher() got multiple values for argument 'grid'
```
A check that the number of arguments matches what the kernel expects, or a better error message, would be good.
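The collision itself is easy to reproduce in plain Python: when the splatted positionals are one too many, the extra one fills the `grid` parameter and the explicit keyword then conflicts with it. A minimal sketch (the `launcher` signature here is illustrative, not Inductor's actual one), including the kind of pre-call arity check this report asks for:

```python
import inspect

def launcher(a, b, grid=None, stream=None):
    return (a, b, grid, stream)

args = (1, 2, 3)  # one positional too many: the extra 3 lands in `grid`

err = None
try:
    launcher(*args, grid="grid", stream="stream")
except TypeError as e:
    err = str(e)  # "launcher() got multiple values for argument 'grid'"

# A pre-call arity check that yields a clearer message instead:
n_expected = sum(
    1 for p in inspect.signature(launcher).parameters.values()
    if p.default is inspect.Parameter.empty
)
msg = None
if len(args) != n_expected:
    msg = f"kernel expects {n_expected} arguments but {len(args)} were passed"
```

Comparing `len(args)` against the kernel's expected arity before launching would turn the confusing `got multiple values for argument 'grid'` into a direct statement of the mismatch.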
### Error logs
_No response_
### Versions
Current main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @aakhundov | true |
2,819,834,111 | DISABLED test_tensor_to_cpu (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_tensor_to_cpu&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36384918644).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_tensor_to_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 4702, in test_tensor_to_cpu
script_fn = torch.jit.script(to_cpu)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/_script.py", line 1209, in _script_impl
ast = get_jit_def(obj, obj.__name__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_tensor_to_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,819,834,073 | DISABLED test_ternary (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_ternary&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36387348201).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_ternary`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 6624, in test_ternary
self.checkScript(func, inputs_true, optimize=True)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_ternary
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,819,834,039 | DISABLED test_script_pad_sequence_pack_sequence (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_pad_sequence_pack_sequence&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36384918644).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_pad_sequence_pack_sequence`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9760, in test_script_pad_sequence_pack_sequence
with torch._jit_internal._disable_emit_hooks():
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9761, in torch_dynamo_resume_in_test_script_pad_sequence_pack_sequence_at_9760
self.checkScript(pad_sequence_func,
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_pad_sequence_pack_sequence
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,819,833,827 | DISABLED test_module_attrs (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_module_attrs&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36387205204).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_module_attrs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 15249, in test_module_attrs
class M(torch.jit.ScriptModule):
...<8 lines>...
return self.table[key] + self.x
File "/var/lib/jenkins/workspace/test/test_jit.py", line 15255, in M
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_module_attrs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
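Every issue body above ends in the same underlying failure: `RuntimeError: dictionary changed size during iteration`, raised while Dynamo's `VariableBuilder` iterates a dict (via `linecache.getlines`) that another code path mutates concurrently. As a hedged illustration only (this is a standalone sketch, not PyTorch or linecache code; the dict and function names are made up), the failure mode reduces to mutating a dict while a plain `for` loop is iterating it:

```python
# Hypothetical minimal reproduction of the failure mode seen in the tracebacks:
# CPython's dict iterator raises RuntimeError if the dict's size changes mid-loop.
cache = {"a.py": 1, "b.py": 2}  # stands in for a shared cache like linecache.cache

def iterate_and_mutate(d):
    try:
        for key in d:       # iteration holds a live dict iterator
            d["c.py"] = 3   # inserting a new key changes the size -> RuntimeError
    except RuntimeError as exc:
        return str(exc)
    return "no error"

msg = iterate_and_mutate(cache)
print(msg)  # "dictionary changed size during iteration"
```

This is why the errors surface in tests that call `inspect.getsource`: source lookup populates `linecache.cache` while Dynamo is iterating module state, so the flake depends on timing rather than on the individual tests.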
2,819,833,793 | DISABLED test_number_abs (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_number_abs&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36384721001).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_number_abs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 7012, in test_number_abs
def test_number_abs(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_number_abs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,819,833,749 | DISABLED test_return (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_return&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36387205204).
Over the past 3 hours, it has been determined flaky in 11 workflow(s) with 11 failures and 11 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_return`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 6790, in test_return
self.checkScript(no_return, [a], optimize=True)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_return
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,819,833,705 | DISABLED test_python_frontend_source_range (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_python_frontend_source_range&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36387205204).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_python_frontend_source_range`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 5867, in test_python_frontend_source_range
def test_python_frontend_source_range(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_python_frontend_source_range
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,819,833,658 | DISABLED test_module_apis (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_module_apis&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36384721001).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_module_apis`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 8833, in test_module_apis
mod = torch.jit.script(MyMod())
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1150, in _script_impl
return torch.jit._recursive.create_script_module(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
obj, torch.jit._recursive.infer_methods_to_compile
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_recursive.py", line 555, in create_script_module
AttributeTypeIsSupportedChecker().check(nn_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_check.py", line 62, in check
source_lines = inspect.getsource(nn_module.__class__.__init__)
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_module_apis
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
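The `RuntimeError: dictionary changed size during iteration` in the trace above comes from `linecache.getlines` repopulating a cache while Dynamo's `VariableBuilder` iterates a dict. A minimal, purely illustrative sketch of that failure mode (these names are hypothetical, not Dynamo's real code):

```python
# Illustrative sketch (not Dynamo's actual code) of the failure mode above:
# one party iterates a dict's items while a callback -- like
# linecache.getlines repopulating its cache -- inserts a new key.
cache = {"a.py": (1, None, ["line 1\n"], "a.py")}
_next_id = [0]

def getlines_like():
    # Simulates linecache inserting a fresh cache entry mid-iteration.
    _next_id[0] += 1
    name = f"generated_{_next_id[0]}.py"
    cache[name] = (1, None, ["source\n"], name)

def provoke():
    try:
        for i, (k, v) in enumerate(cache.items()):
            getlines_like()  # mutates `cache` while we iterate it
    except RuntimeError as e:
        return str(e)
    return None

print(provoke())  # dictionary changed size during iteration
```

Snapshotting the items first (e.g. `list(cache.items())`) avoids the error, which is the usual fix for this pattern.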
2,819,833,612 | DISABLED test_torch_pow (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_torch_pow&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36387205204).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_torch_pow`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 4877, in test_torch_pow
def test_torch_pow(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_torch_pow
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,819,833,572 | DISABLED test_torch_functional_tensordot_list (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_torch_functional_tensordot_list&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36387348201).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_torch_functional_tensordot_list`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9201, in test_torch_functional_tensordot_list
self.checkScript(tensordot_dims_list, (a, b, dims))
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1024, in getsource
lines, lnum = getsourcelines(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_torch_functional_tensordot_list
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @amjames | true |
2,819,833,532 | DISABLED test_int64_upsample3d_cuda_bfloat16 (__main__.TestTorchDeviceTypeCUDA) | pytorch-bot[bot] | open | [
"module: rocm",
"module: tests",
"triaged",
"module: flaky-tests",
"skipped"
] | 19 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_int64_upsample3d_cuda_bfloat16&suite=TestTorchDeviceTypeCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36378413998).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_int64_upsample3d_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_torch.py", line 185, in test_int64_upsample3d
torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/functional.py", line 4651, in interpolate
return torch._C._nn.upsample_nearest3d(input, output_size, scale_factors)
torch.OutOfMemoryError: HIP out of memory. Tried to allocate 56.25 GiB. GPU 0 has a total capacity of 63.98 GiB of which 55.80 GiB is free. Of the allocated memory 7.06 GiB is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_torch.py", line 187, in test_int64_upsample3d
self.fail(f"Unexpected exception raised: {e}")
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: Unexpected exception raised: HIP out of memory. Tried to allocate 56.25 GiB. GPU 0 has a total capacity of 63.98 GiB of which 55.80 GiB is free. Of the allocated memory 7.06 GiB is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_torch.py TestTorchDeviceTypeCUDA.test_int64_upsample3d_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_torch.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @mruberry @ZainRizvi @clee2000 @wdvr | true |
2,819,780,985 | nonzero_static with symint size | avikchaudhuri | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | CONTRIBUTOR | Summary: Previously `nonzero_static` would force specialization on the `size` argument. This PR enables it to be used with a dynamic `size` argument.
Test Plan: added test
Differential Revision: D68874784
| true |
2,819,773,726 | nonzero_static with symint size | avikchaudhuri | closed | [] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Previously `nonzero_static` would force specialization on the `size` argument. This PR enables it to be used with a dynamic `size` argument.
Differential Revision: [D68874784](https://our.internmc.facebook.com/intern/diff/D68874784/) | true |
2,819,771,068 | nonzero_static with symint size | avikchaudhuri | closed | [] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Previously `nonzero_static` would force specialization on the `size` argument. This PR enables it to be used with a dynamic `size` argument.
Differential Revision: [D68874784](https://our.internmc.facebook.com/intern/diff/D68874784/) | true |
2,819,760,751 | [ONNX] Create deprecation warning on dynamo_export | justinchuby | closed | [
"module: onnx",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: onnx",
"topic: deprecation",
"ci-no-td"
] | 24 | COLLABORATOR | Deprecation of `torch.onnx.dynamo_export`:
* [`torch/onnx/_internal/_exporter_legacy.py`](diffhunk://#diff-4d1eb96fe68ea904dcd1f8211318b9ff882dbfe4c3cb725ffc164b6c5a58b74cR83-R86): Added deprecation warnings to the `OnnxRegistry`, `ExportOptions`, `ONNXRuntimeOptions`, and `dynamo_export` functions, indicating that `torch.onnx.dynamo_export` is deprecated since version 2.6.0 and should be replaced with `torch.onnx.export(..., dynamo=True)`. [[1]](diffhunk://#diff-4d1eb96fe68ea904dcd1f8211318b9ff882dbfe4c3cb725ffc164b6c5a58b74cR83-R86) [[2]](diffhunk://#diff-4d1eb96fe68ea904dcd1f8211318b9ff882dbfe4c3cb725ffc164b6c5a58b74cR231-R234) [[3]](diffhunk://#diff-4d1eb96fe68ea904dcd1f8211318b9ff882dbfe4c3cb725ffc164b6c5a58b74cR442-R445) [[4]](diffhunk://#diff-4d1eb96fe68ea904dcd1f8211318b9ff882dbfe4c3cb725ffc164b6c5a58b74cR700-R703)
This PR also removes the `**_` kwarg on `onnx.export` so that users get an error when they supply an unexpected argument.
Updated to emit deprecation warning because it is more appropriate: https://docs.python.org/3/library/exceptions.html#DeprecationWarning | true |
2,819,753,071 | [ONNX] Delete `rename_dynamic_shapes_with_model_inputs` | titaiwangms | closed | [
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: bug fixes"
] | 6 | COLLABORATOR | Basically, this function brings more cons than pros.
It was nice to have automation that helps users convert the top-level keys of dynamic_shapes to arg names. However, this function has a bug when the model inputs happen to have the same count as dynamic_shapes:
```python
input_names
# 'input_ids', 'past_key_values.0.key', 'past_key_values.0.value', 'past_key_values.1.key', 'past_key_values.1.value', 'past_key_values.2.key', 'past_key_values.2.value', 'past_key_values.3.key', 'past_key_values.3.value', 'past_key_values.4.key', 'past_key_values.4.value', 'attention_mask', 'position_ids'
inspect.sig(model.forward).parameters
# mappingproxy(OrderedDict([('input_ids', <Parameter "input_ids: Optional[torch.LongTensor] = None">), ('past_key_values', <Parameter "past_key_values: Union[transformers.cache_utils.Cache, Tuple[Tuple[torch.Tensor]], NoneType] = None">), ('attention_mask', <Parameter "attention_mask: Optional[torch.FloatTensor] = None">), ('token_type_ids', <Parameter "token_type_ids: Optional[torch.LongTensor] = None">), ('position_ids', <Parameter "position_ids: Optional[torch.LongTensor] = None">), ('head_mask', <Parameter "head_mask: Optional[torch.FloatTensor] = None">), ('inputs_embeds', <Parameter "inputs_embeds: Optional[torch.FloatTensor] = None">), ('labels', <Parameter "labels: Optional[torch.LongTensor] = None">), ('use_cache', <Parameter "use_cache: Optional[bool] = None">), ('output_attentions', <Parameter "output_attentions: Optional[bool] = None">), ('output_hidden_states', <Parameter "output_hidden_states: Optional[bool] = None">), ('return_dict', <Parameter "return_dict: Optional[bool] = None">), ('cache_position', <Parameter "cache_position: Optional[torch.LongTensor] = None">)]))
```
In the above case, the given input_names follows the ONNX graph, yet it has the same length as the torch model's forward call. Cases like this are difficult to detect and hard to automate for users.
On the other hand, the error message from torch.export.export is informative enough that I believe users will know how to proceed from there:
```python
import torch
class Model(torch.nn.Module):
def forward(self, x=None, y=None):
return x + y
dim = torch.export.Dim("x", min=1, max=6)
onnx_program = torch.export.export(
Model(),
(),
kwargs={"x": torch.randn(2, 3), "y": torch.randn(2, 3)},
dynamic_shapes={"custom_input_x": {0: dim}, "custom_input_y": {0: dim}},
)
# torch._dynamo.exc.UserError: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['x', 'y'] of `inputs`, but here they are ['custom_input_x', 'custom_input_y']. Alternatively, you could also ignore arg names entirely and specify `dynamic_shapes` as a list/tuple matching `inputs`. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
```
| true |
2,819,703,893 | Allow replacing unbacked with very large upperbound by returning no-op for FloorToInt(int) | ColinPeppler | closed | [
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 16 | CONTRIBUTOR | * Let's say x is an integer beyond 2^53 where Python floats lose precision i.e. can't increment by 1.
* Therefore, float(x) will lose precision and won't retain the exact value of x even though it's an integer.
* That means `FloorToInt(very_large_number)` will lose precision if we cast it to float
```
>>> int(float(1000000007999999992))
1000000008000000000
```
This means when we try to do this in set_replacement():
https://github.com/pytorch/pytorch/blob/32bb6f83d5e9819560c2a074a193740c989f765d/torch/fx/experimental/symbolic_shapes.py#L6011-L6019
We run into this:
```
TORCH_LOGS="+torch.fx.experimental.symbolic_shapes" pytest -s test_export.py -k test_replace_unbacked_with_very_large_upperbound
File "/data/users/colinpeppler/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6258, in _maybe_guard_rel
self._set_replacement(rhs, self._find(lhs), "trivial_rhs")
File "/data/users/colinpeppler/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6039, in _set_replacement
assert tgt_bound.issubset(
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <built-in function add>(*(FakeTensor(..., size=(2*s0,)), FakeTensor(..., size=(u0,))), **{}):
tgt_bound=VR[4, 1000000008000000000] not a subset of src_bound=VR[4, 1000000007999999992]
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146001
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv | true |
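The precision loss behind the bound mismatch above can be checked directly in pure Python: a double has a 53-bit mantissa, so at magnitudes near 10^18 the round-trip through `float` snaps to the nearest representable value.

```python
# Pure-Python check of the bound mismatch above: floats have a 53-bit
# mantissa, so integers beyond 2**53 do not all round-trip through float.
big = 1000000007999999992          # upper bound from the error message
assert big > 2 ** 53               # beyond the exact-integer float range
roundtrip = int(float(big))
print(roundtrip)                   # 1000000008000000000
print(roundtrip - big)             # 8
# At this magnitude adjacent doubles are 2**(59 - 52) = 128 apart, so the
# round-trip lands on the nearest multiple of 128.
assert roundtrip % 128 == 0
```

This is exactly the 8-unit discrepancy shown in the `tgt_bound`/`src_bound` assertion failure above.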
2,819,700,457 | [Inductor] Expand Identity ops prior to block pattern matching | blaine-rister | closed | [
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | # Feature
Inductor sometimes uses `Identity` functions to group various terms of an expression. While this is convenient in some scenarios, it can frustrate pattern matching. For example, when we're matching an indexing expression to tell if it can be represented as a block pointer, that analysis should be invariant to `Identity`'s.
This PR adds a few features to achieve this invariance.
- Create a new expansion mode `expr.expand(identity=True)`, which removes all `Identity` functions from the expression.
- Preprocess the expression with this expansion prior to pattern matching.
- Bonus: create a new test utility function called `dummy_graph()`, which creates a simple `GraphLowering`. This is useful for testing the pattern matcher, as we need to initialize `V.graph` before we can access `V.graph.sizevars`.
# Test plan
This PR adds a few new unit tests:
- Added a unit test specifically for `expr.expand(identity=True)`.
- Added a new unit test module for the block pattern matcher. Tested that we can correctly match some example patterns containing Identity ops.
I originally intended to add an end to end test compiling pointwise cat, and mapping the corresponding memory accesses to block pointers. However, it looks like that will take more work, since the [relevant code path](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/triton.py#L1306) disables block pointer analysis. It might be better to defer that to a future PR.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
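The idea of being invariant to `Identity` wrappers can be sketched with a hypothetical, self-contained example (these are not Inductor's real classes; `expand_identity` only mirrors the spirit of `expr.expand(identity=True)`):

```python
# Hypothetical illustration: pattern matching should see through grouping
# Identity wrappers, so expressions are preprocessed to strip them.
class Identity:
    """Grouping wrapper that does not change the value it wraps."""
    def __init__(self, inner):
        self.inner = inner

def expand_identity(expr):
    # Recursively remove every Identity layer from a nested expression.
    while isinstance(expr, Identity):
        expr = expr.inner
    if isinstance(expr, tuple):
        return tuple(expand_identity(e) for e in expr)
    return expr

expr = Identity(("x0", Identity(Identity("x1"))))
print(expand_identity(expr))  # ('x0', 'x1')
```

After this normalization, two expressions that differ only in grouping compare equal, which is what the block-pointer analysis needs.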
2,819,679,521 | [CUDA] Optimize CUDA occupancy for indexing operators like index_select, index_add and index_reduce | YyWangCS | open | [
"triaged",
"open source",
"Stale",
"release notes: cuda"
] | 5 | NONE | ### Background
For indexing operations such as **torch.index_select**, **torch.index_add**, and **torch.index_reduce**, GPU performance is relatively low when handling large input sizes. On an A100 GPU, `torch.index_select` achieves only about 30% of the theoretical memory bandwidth for large inputs. Notably, `torch.index_select` and `torch.index_add` are among the most time-consuming operations in the forward and backward passes of `torch.nn.Embedding`, which is widely used in NLP and deep learning ranking models (DLRM).
Upon analysis, I identified the following line of code: `dim3 largeIndexGrid(std::min(ceil_div(sourceTotalSize, (uint64_t)128), (uint64_t)(mpc * 8)))`. Here, the blockSize is fixed at 128, and the number of blocks is set to `mpc * 8`, where mpc represents the number of streaming multiprocessors (SMs). This configuration results in only 1024 threads per SM (128 × 8). However, on an A100 GPU, each SM supports up to 2048 concurrent threads, meaning that the theoretical occupancy is limited to 50%.
This commit improves grid dimension calculation by leveraging maxThreadsPerMultiProcessor and blockSize, aligning with best practices commonly used in other GPU kernels in PyTorch.
### Tests Performed
1. Correctness testing: All tests in `test/test_torch.py` passed, including numerous tests covering `torch.index_select`, `torch.index_add` and `torch.index_reduce`.
2. Performance testing: I run performance testing with the following [script ](https://github.com/YyWangCS/FairySpeed/blob/main/embedding/bench_index_ops.py)on A100, the performance number is as follows.
#### torch.index_select
| num_embedding | embedding_dim | input_size | kernel latency before optimization (µs) | kernel latency after optimization (µs) |
| ------------- | ------------- | ---------- | --------------------------------------- | -------------------------------------- |
| 1000000 | 128 | 307200 | 518.3 | 359.4 |
| 1000000 | 32 | 307200 | 141.4 | 97.3 |
| 1000000 | 128 | 204800 | 347.5 | 242.4 |
| 1000000 | 32 | 204800 | 96.2 | 66.2 |
| 128000 | 4096 | 4096 | 219.2 | 158.6 |
#### torch.index_add
| num_embedding | embedding_dim | input_size | kernel latency before optimization (µs) | kernel latency after optimization (µs) |
| ------------- | ------------- | ---------- | --------------------------------------- | -------------------------------------- |
| 1000000 | 128 | 307200 | 526.8 | 379.4 |
| 1000000 | 32 | 307200 | 143.4 | 103.3 |
| 1000000 | 128 | 204800 | 352.2 | 256.1 |
| 1000000 | 32 | 204800 | 98.9 | 69.9 |
| 128000 | 4096 | 4096 | 222.8 | 165.0 |
#### torch.index_reduce
| num_embedding | embedding_dim | input_size | kernel latency before optimization (µs) | kernel latency after optimization (µs) |
| ------------- | ------------- | ---------- | --------------------------------------- | -------------------------------------- |
| 1000000 | 128 | 307200 | 732.5 | 470.7 |
| 1000000 | 32 | 307200 | 197.5 | 126.6 |
| 1000000 | 128 | 204800 | 490.8 | 316.4 |
| 1000000 | 32 | 204800 | 133.7 | 86.1 |
### Reference
[Performance Optimization of Embedding Computation on GPU Part 1: GPU Occupancy Optimization](https://yywangcs.notion.site/Performance-Optimization-of-Embedding-Computation-on-GPU-Part-1-GPU-Occupancy-Optimization-178fc9f5d805800e91b6d4490afcc665) | true |
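The 50% occupancy claim above follows from simple arithmetic (the A100 numbers are taken from the issue text):

```python
# Back-of-the-envelope arithmetic behind the 50% occupancy cap described
# above (A100 limits are assumptions from the issue text).
block_size = 128                   # fixed blockSize in the old grid calc
blocks_per_sm = 8                  # grid capped at mpc * 8
max_threads_per_sm = 2048          # A100 hardware limit per SM
resident_threads = block_size * blocks_per_sm
occupancy = resident_threads / max_threads_per_sm
print(resident_threads)            # 1024
print(f"{occupancy:.0%}")          # 50%
```

Sizing the grid from `maxThreadsPerMultiProcessor / blockSize` blocks per SM instead would let the kernel reach full theoretical occupancy.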
2,819,677,000 | [DCP] Remove all-gather of state dict keys | kwen2501 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145998
The original `_all_gather_keys` call was for a safety check, but it could be costly at scale, and it blocks the CPU.
Instead, we make it clear in the documentation that the `state_dict` passed to the `load` API should have the same set of keys, otherwise the API may hang.
In addition, we move the check to a utility function: `utils.assert_same_keys`. Users uncertain about state-dict key uniformity can optionally call this API to check.
Resolves #145965 (as a workaround).
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,819,664,251 | DISABLED test_aoti_eager_override_registration_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | pytorch-bot[bot] | closed | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_override_registration_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36371802617).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_override_registration_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1255, in test_aoti_eager_override_registration
res_array.append(getattr(torch, unary_op_name)(x))
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_aoti_eager_override_registration_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,819,657,638 | [CUDA][CUDA Graphs] Fix debug mode warning message | eqy | closed | [
"module: cuda",
"open source",
"Merged",
"module: cuda graphs",
"ciflow/trunk",
"topic: not user facing"
] | 15 | COLLABORATOR | The real method is `enable_debug_mode()`; `_cuda_enable_graphs_debug_mode` does not exist.
cc @ptrblck @msaroufim @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng | true |
2,819,656,752 | torch.jit.trace does not move tensors to GPU. | johan-irreverentlabs | open | [
"oncall: jit"
] | 0 | NONE | ### 🐛 Describe the bug
When running a function on the GPU after it was traced on the CPU, tensors are not properly moved to the GPU. Here is a minimal repro:
```
import torch
import math
def timestep_embedding(timesteps, dim):
freqs = torch.arange(start=0, end=dim, dtype=torch.float32, device=timesteps.device)
return timesteps[:, None] * freqs[None]
def test_time_embedding():
dim = torch.tensor(256).to("cpu")
t = torch.zeros((1)).long().to("cpu")
script = torch.jit.trace(timestep_embedding, (t, dim))
print("Running original function with CPU inputs...")
_ = timestep_embedding(t, dim)
print("Running scripted function with CPU inputs...")
_ = script(t, dim)
print("Running original function with GPU inputs...")
_ = timestep_embedding(t.to("cuda"), dim.to("cuda"))
print("Running scripted function with GPU inputs...")
_ = script(t.to("cuda"), dim.to("cuda"))
print("Success")
if __name__ == "__main__":
test_time_embedding()
```
The output is:
```
Running original function with CPU inputs...
Running scripted function with CPU inputs...
Running original function with GPU inputs...
Running scripted function with GPU inputs...
Traceback (most recent call last):
File "/irreverent-ml/inference/repro.py", line 24, in <module>
test_time_embedding()
File "/irreverent-ml/inference/repro.py", line 20, in test_time_embedding
_ = script(t.to("cuda"), dim.to("cuda"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
/irreverent-ml/inference/repro.py(7): timestep_embedding
/usr/local/lib/python3.12/dist-packages/torch/jit/_trace.py(764): _trace_impl
/usr/local/lib/python3.12/dist-packages/torch/jit/_trace.py(1000): trace
/irreverent-ml/inference/repro.py(12): test_time_embedding
/irreverent-ml/inference/repro.py(24): <module>
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
Forcing timesteps to the same device after unsqueezing somehow fixes the problem, but I don't know why:
```
def timestep_embedding(timesteps, dim):
freqs = torch.arange(start=0, end=dim, dtype=torch.float32, device=timesteps.device)
return timesteps[:, None].to(timesteps.device) * freqs[None]
```
### Versions
```
root@research-vm-0:/irreverent-ml# wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
--2025-01-30 00:49:30-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24353 (24K) [text/plain]
Saving to: ‘collect_env.py’
collect_env.py 100%[================================================================================>] 23.78K --.-KB/s in 0.007s
2025-01-30 00:49:30 (3.57 MB/s) - ‘collect_env.py’ saved [24353/24353]
Collecting environment information...
PyTorch version: 2.6.0a0+ecf3bae40a.nv25.01
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1020-gcp-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cudnn-frontend==1.9.0
[pip3] nvtx==0.2.5
[pip3] onnx==1.17.0
[pip3] optree==0.14.0
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-triton==3.1.0+cf34004b8.internal
[pip3] rotary-embedding-torch==0.8.6
[pip3] torch==2.6.0a0+ecf3bae40a.nv25.1
[pip3] torch-dct==0.1.6
[pip3] torch_tensorrt==2.6.0a0
[pip3] torchmetrics==1.6.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.20.0a0
[conda] Could not collect
```
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,819,649,063 | [dynamo][dicts] Support construction of types.MappingProxyType | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146075
* #146070
* #146062
* #145989
* __->__ #145994
* #145987
* #145986
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,819,640,324 | [inductor] Add typing to common.CSE | jansel | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 20 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146226
* #146225
* __->__ #145993
* #145916
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,819,631,806 | fix indirect broadcast | FindHao | open | [
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going"
] | 11 | MEMBER | Fixes #142250
```bash
python run.py --op embedding --mode fwd --precision fp32 --metrics latency,speedup --csv
```
The collected performance data is shown below.
| (B, T, D, V) | latency | | | | speedup | | |
| ----------------------- | ------- | ----- | ------------------- | ------------------ | ------- | ------------------- | ------------------ |
| | torch | liger | inductor-before-fix | inductor-after-fix | liger | inductor-before-fix | inductor-after-fix |
| (32, 512, 768, 1024) | 0.086 | 0.029 | 0.029 | 0.029 | 2.954 | 3.014 | 3.023 |
| (32, 512, 768, 2048) | 0.088 | 0.032 | 0.032 | 0.032 | 2.783 | 2.743 | 2.753 |
| (32, 512, 768, 4096) | 0.091 | 0.034 | 0.037 | 0.035 | 2.642 | **2.444** | 2.575 |
| (32, 512, 768, 8192) | 0.095 | 0.039 | 0.042 | 0.040 | 2.423 | **2.236** | 2.387 |
| (32, 512, 768, 16384) | 0.099 | 0.045 | 0.047 | 0.045 | 2.224 | **2.117** | 2.213 |
| (32, 512, 768, 32768) | 0.102 | 0.048 | 0.050 | 0.049 | 2.136 | **2.059** | 2.103 |
| (32, 512, 768, 65536) | 0.105 | 0.050 | 0.052 | 0.051 | 2.092 | 2.027 | 2.053 |
| (32, 512, 768, 131072) | 0.107 | 0.052 | 0.052 | 0.052 | 2.062 | 2.053 | 2.046 |
| (8, 2048, 4096, 1024) | 0.431 | 0.144 | 0.173 | 0.137 | 3.002 | **2.495** | 3.145 |
| (8, 2048, 4096, 2048) | 0.459 | 0.153 | 0.204 | 0.149 | 2.995 | **2.249** | 3.075 |
| (8, 2048, 4096, 4096) | 0.484 | 0.168 | 0.226 | 0.166 | 2.883 | **2.139** | 2.906 |
| (8, 2048, 4096, 8192) | 0.495 | 0.189 | 0.238 | 0.190 | 2.615 | **2.084** | 2.608 |
| (8, 2048, 4096, 16384) | 0.506 | 0.215 | 0.246 | 0.216 | 2.355 | **2.063** | 2.349 |
| (8, 2048, 4096, 32768) | 0.513 | 0.235 | 0.249 | 0.234 | 2.184 | **2.063** | 2.195 |
| (8, 2048, 4096, 65536) | 0.516 | 0.245 | 0.250 | 0.244 | 2.110 | 2.066 | 2.117 |
| (8, 2048, 4096, 131072) | 0.516 | 0.252 | 0.251 | 0.250 | 2.051 | 2.059 | 2.067 |
When building memory dependencies for a scheduler node, we check whether a memory dependency is an indirect broadcast (i.e., indices are loaded indirectly from a tensor along one dimension and broadcast along another dimension to index the current memory dependency). If so, we record the dimension pairs (indirect-load dimension, broadcast dimension). When deciding whether tiling needs to be applied, this information is used to generate a new loop order for the dimensions. If that order differs from the current one, we apply the new loop order and apply tiling where possible.
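The loop-reordering decision described above can be sketched as follows (hypothetical helper, not the actual Inductor code): given the recorded (indirect-load dimension, broadcast dimension) pairs, push each broadcast dimension to the innermost loop position, and only reorder when the result differs from the current order.

```python
def reorder_loops(num_dims, indirect_broadcast_pairs):
    """Return a loop order (outer -> inner) that moves each recorded
    broadcast dimension to the innermost position."""
    order = list(range(num_dims))
    for _load_dim, bcast_dim in indirect_broadcast_pairs:
        order.remove(bcast_dim)
        order.append(bcast_dim)  # innermost loop now iterates the broadcast dim
    return order

# 3-D kernel where dim 0 carries the indirect load and dim 1 is broadcast:
print(reorder_loops(3, [(0, 1)]))  # [0, 2, 1] differs from [0, 1, 2] -> reorder
```

If the returned order equals the current one, no reordering (and hence no extra tiling) would be applied.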
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,819,630,361 | Add buffers to parameterization rule | tugsbayasgalan | closed | [
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145991
Differential Revision: [D68959513](https://our.internmc.facebook.com/intern/diff/D68959513) | true |
2,819,629,559 | [Profiler] Add Full PG ranks to Metadata | sraikund16 | closed | [
"enhancement",
"fb-exported",
"release notes: profiler"
] | 7 | CONTRIBUTOR | Summary: Currently we only add a shortened list of PG ranks when a job is distributed across multiple nodes. Let's add the full PG ranks to the JSON metadata.
Test Plan:
https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/dynocli/devvm2185.cco0.facebook.com/rank-0.Jan_29_16_19_52.2285734.pt.trace.json.gz&bucket=gpu_traces
{F1974810517}
Differential Revision: D68867518
| true |
2,819,628,496 | [dynamo][polyfills] Support getrecursionlimit | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146075
* #146070
* #146062
* __->__ #145989
* #145994
* #145987
* #145986
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,819,626,116 | [audio hash update] update the pinned audio hash | pytorchupdatebot | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 12 | COLLABORATOR | This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash. | true |
2,819,610,982 | [dynamo][functions] Support `id` on function | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146075
* #146070
* #146062
* #145989
* #145994
* __->__ #145987
* #145986
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,819,604,859 | [dynamo][dicts] Raise exception on pop | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146075
* #146070
* #146062
* #145989
* #145994
* #145987
* __->__ #145986
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,819,598,877 | [Utilization] Convert timestamp to str for datetime64 | yangw-dev | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"suppress-bc-linter"
] | 5 | CONTRIBUTOR | Convert all timestamps (float) to int timestamps in the data pipeline for the db type datetime64.
A float does not work when trying to insert into ClickHouse using jsonExtract.
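A minimal illustration of the conversion described above (the helper name is hypothetical; this is not the PR's actual code): drop the fractional part of the float epoch timestamp so the value survives `jsonExtract` into a `DateTime64` column.

```python
def to_int_timestamp(ts: float) -> int:
    # Truncate to whole seconds: an integer round-trips through JSON
    # extraction into a datetime64 column, while a float does not.
    return int(ts)

print(to_int_timestamp(1706572792.847))  # 1706572792
```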
| true |
2,819,595,531 | Autotuning failure: `Triton Error [CUDA]: invalid argument` | bhack | closed | [
"high priority",
"oncall: pt2",
"export-triage-review",
"oncall: export",
"module: aotinductor"
] | 33 | CONTRIBUTOR | ### 🐛 Describe the bug
`aoti_compile_and_package` is failing during autotuning on the just-released `2.6.0`.
Exactly the same code works with the last 2.6.0 build from the nightly channel (`20050104`).
I cannot share the code, but the failure is fully reproducible, so let me know if you need any extra logs.
### Error logs
```python
Failed to run autotuning code block: Triton Error [CUDA]: invalid argument
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/codegen/wrapper.py", line 1237, in generate_and_run_autotune_block
exec(tuning_code, scope)
File "<string>", line 19027, in <module>
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1079, in run
return launcher(
^^^^^^^^^
File "<string>", line 13, in launcher
File "/opt/conda/lib/python3.11/site-packages/triton/backends/nvidia/driver.py", line 444, in __call__
self.launch(*args, **kwargs)
RuntimeError: Triton Error [CUDA]: invalid argument
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
aoti_compile_and_package(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/__init__.py", line 122, in aoti_compile_and_package
return aot_inductor_minifier_wrapper(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/debug.py", line 751, in aot_inductor_minifier_wrapper
raise e
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/debug.py", line 733, in aot_inductor_minifier_wrapper
return func(
^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/__init__.py", line 152, in _aoti_compile_and_package_inner
aoti_files = aot_compile(gm, args, kwargs, options=inductor_configs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/__init__.py", line 226, in aot_compile
return compile_fx_aot(
^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1361, in compile_fx_aot
compiled_artifacts = compile_fx(
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1552, in compile_fx
return compile_fx(
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1595, in compile_fx
return compile_fx(
^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1856, in compile_fx
return inference_compiler(unlifted_gm, example_inputs_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 675, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1015, in codegen_and_compile
code, linemap = graph.codegen_with_cpp_wrapper()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1867, in codegen_with_cpp_wrapper
return self.codegen()
^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1975, in codegen
result = self.wrapper_code.generate(self.is_inference)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/codegen/cpp_wrapper_gpu.py", line 291, in generate
return super().generate(is_inference)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/codegen/cpp_wrapper_cpu.py", line 842, in generate
return super().generate(is_inference)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/codegen/wrapper.py", line 1132, in generate
return self._generate(is_inference)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/codegen/wrapper.py", line 1183, in _generate
self.generate_and_run_autotune_block()
File "/opt/conda/lib/python3.11/site-packages/torch/_inductor/codegen/wrapper.py", line 1239, in generate_and_run_autotune_block
raise RuntimeError(f"Failed to run autotuning code block: {e}") from e
RuntimeError: Failed to run autotuning code block: Triton Error [CUDA]: invalid argument
```
### Versions
2.6.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | true |
2,819,567,544 | when pip installing torch, HTTP status server error (500 Internal Server Error) for url (https://download.pytorch.org/whl/cu124/pycparser/) | rbavery | closed | [
"needs reproduction",
"oncall: releng"
] | 5 | NONE | ### 🐛 Describe the bug
I'm pip installing PyTorch for my package in an Ubuntu 22.04 GitHub runner CI environment:
```
uv pip install -e . --extra-index-url https://download.pytorch.org/whl/cu124 --index-strategy unsafe-best-match
```
I get this error
```
#15 11.33 error: Failed to fetch: `https://download.pytorch.org/whl/cu124/pycparser/`
#15 11.33 Caused by: HTTP status server error (500 Internal Server Error) for url (https://download.pytorch.org/whl/cu124/pycparser/)
```
I reran the failed job and it got past this, but I'm a bit concerned about the 500 error since we depend on installing PyTorch for our user environments.
possibly related issue https://github.com/pytorch/pytorch/issues/14701
### Versions
N/A ? | true |
2,819,559,767 | [dynamo] remove always-failing eval_frame.c debug check | williamwen42 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145603
* __->__ #145982
* #145981
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,819,559,678 | [dynamo] disable eval_frame callback in _TorchDynamoContext __enter__/__exit__ | williamwen42 | closed | [
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145603
* #145982
* __->__ #145981
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,819,548,865 | config: Support str env variables | c00w | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145980
Summary:
This allows us to use environment variables to set string values. We've added
tests for the specific functionality implemented here. Note that we already
accidentally started setting up configs to use this, so we're just adding the
feature.
Additionally, we're not fully validating the underlying type when we set the
value (and in general, it's more difficult than we would like to do this). Let
me know if people feel strongly, and we can add a PR to do this. | true |
2,819,525,404 | [draft_export] better stack logging for strict mode | pianpwk | closed | [
"Stale",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export"
] | 3 | CONTRIBUTOR | Strict-mode draft export tends to log unhelpful stack traces for guards/data-dependent errors, relying on `CapturedTraceback.extract()`, which is only accurate for non-strict. For dynamo, it's better to use `TracingContext.extract_stack()` and fall back to the former when it is empty, avoiding traces that point to the top-level export call or to lambdas (in the case of `torch._check` calls).
e.g. before, for `test_draft_export.py -k test_offsets`:
```
This occurred at the following stacktrace:
File /data/users/pianpwk/pytorch/test/export/test_draft_export.py, lineno 259, in test_offsets:
`ep, report = draft_export(M(), inp, strict=True)`
```
after:
```
This occurred at the following stacktrace:
File /data/users/pianpwk/pytorch/test/export/test_draft_export.py, lineno 254, in forward:
`if a == 0:`
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,819,469,655 | What is the recommended way to use Distributed Checkpointing Save/Load with HSDP? | gkroiz | open | [
"oncall: distributed",
"triaged",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 12 | NONE | ### 🐛 Describe the bug
There are torch distributed checkpointing examples in [torch/distributed/checkpoint/examples](https://github.com/pytorch/pytorch/tree/main/torch/distributed/checkpoint/examples). All of these examples use FSDP. Running these examples out of the box works without issues: the loaded checkpoint state matches the saved checkpoint state. However, when I convert these examples to use HSDP instead of FSDP, I notice that the loaded state no longer matches the saved state.
How I am converting from FSDP to HSDP:
```
model = FSDP(
torch.nn.Linear(4, 4).cuda(dist.get_rank()),
device_mesh=mesh,
sharding_strategy=ShardingStrategy.HYBRID_SHARD
)
```
[Link](https://gist.github.com/gkroiz/fcf5ed19665bc09475057f8bf626e853) to gist of updated [torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py](https://github.com/pytorch/pytorch/blob/main/torch/distributed/checkpoint/examples/fsdp_checkpoint_example.py) with HSDP modifications and printed output.
I also made similar changes to [torch/distributed/checkpoint/examples/stateful_example.py](https://github.com/pytorch/pytorch/blob/main/torch/distributed/checkpoint/examples/stateful_example.py) and saw the same discrepancies between saved and loaded state.
Either (1) I'm setting up HSDP + distributed checkpointing incorrectly or (2) there is a bug with distributed checkpointing. Assuming (1), what is the correct way to set up HSDP + distributed checkpointing?
### Versions
```
my_vm:/workspace# python collect_env.py
/usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too old to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using `python3 -m pip install --upgrade 'optree>=0.13.0'`.
warnings.warn(
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.6.44+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.5.1
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvidia-pytriton==0.5.12
[pip3] nvtx==0.2.5
[pip3] onnx==1.17.0
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.12.1
[pip3] pynvjitlink==0.2.3
[pip3] pynvjitlink-cu12==0.4.0
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.0.0+989adb9a2
[pip3] torch==2.6.0
[pip3] torch-tensorrt==2.5.0a0
[pip3] torchaudio==2.6.0
[pip3] torchdiffeq==0.2.4
[pip3] torchmetrics==1.5.1
[pip3] torchprofile==0.0.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[pip3] torchx==0.7.0
[pip3] triton==3.2.0
[pip3] tritonclient==2.51.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,819,444,580 | A bunch of fft ops fails the size/strides assert | shunting314 | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
This is exposed by https://github.com/pytorch/pytorch/pull/145904
Test for fft_hfftn fails a size/stride assertion. This most likely means the op's meta implementation is inconsistent with the eager implementation regarding the output strides.
Repro:
```
TORCHINDUCTOR_SIZE_ASSERTS=1 PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_fft_hfftn_cuda_float16
```
Similar failures happen for
- fft_fft (https://github.com/pytorch/pytorch/actions/runs/13023114885/job/36328152122 )
- fft_ifft
- fft_ihfft
- fft_ihfft2
- fft_rfft
- fft_rfft2
But I could not repro those locally.
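For context, a size/stride assert fires when the strides the meta function predicted for an output differ from the strides the eager kernel actually produced. As a minimal, torch-free sketch (hypothetical illustration, not from the issue), the expected strides of a C-contiguous output can be derived from its sizes like this; a mismatch against the kernel's real strides is what the assertion catches:

```python
def contiguous_strides(sizes):
    # Strides of a C-contiguous tensor: each stride is the product
    # of all trailing dimension sizes.
    strides = []
    acc = 1
    for s in reversed(sizes):
        strides.append(acc)
        acc *= s
    return list(reversed(strides))

# A meta function that assumed contiguous output would predict these
# strides; if the eager fft kernel returns something else (e.g. a
# transposed layout), the size/stride assert fires.
print(contiguous_strides([2, 3, 4]))  # [12, 4, 1]
```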
### Error logs
_No response_
### Versions
.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @aakhundov | true |
2,819,388,881 | DISABLED test_script_outputs (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"module: flaky-tests",
"skipped"
] | 3 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_outputs&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36368721106).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_outputs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9816, in test_script_outputs
def test_script_outputs(self):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9817, in torch_dynamo_resume_in_test_script_outputs_at_9817
with self.assertRaisesRegex(RuntimeError, "cannot be used as a tuple"):
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 276, in __exit__
self._raiseFailure('"{}" does not match "{}"'.format(
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
expected_regex.pattern, str(exc_value)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 200, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: "cannot be used as a tuple" does not match "RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
"
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_outputs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu | true |
2,819,388,806 | DISABLED test_non_final_return (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"module: flaky-tests",
"skipped"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_non_final_return&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36368721106).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_non_final_return`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 14007, in test_non_final_return
self.checkScript(func, (torch.tensor(2.5 + i),))
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 483, in checkScript
source = textwrap.dedent(inspect.getsource(script))
~~~~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_non_final_return
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu | true |
2,819,388,725 | DISABLED test_script_annotation (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"module: flaky-tests",
"skipped"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_script_annotation&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36368721106).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_script_annotation`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 4870, in test_script_annotation
def test_script_annotation(self):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1209, in _script_impl
ast = get_jit_def(obj, obj.__name__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_script_annotation
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu | true |
2,819,388,652 | DISABLED test_reassign_module_lhs (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"module: flaky-tests",
"skipped"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reassign_module_lhs&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36368721106).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reassign_module_lhs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 11303, in test_reassign_module_lhs
def test_reassign_module_lhs(self):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 11304, in torch_dynamo_resume_in_test_reassign_module_lhs_at_11304
with self.assertRaisesRegex(RuntimeError, 'Cannot re-assign \'self\''):
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 276, in __exit__
self._raiseFailure('"{}" does not match "{}"'.format(
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
expected_regex.pattern, str(exc_value)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 200, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: "Cannot re-assign 'self'" does not match "RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
"
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_reassign_module_lhs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu | true |
2,819,388,619 | DISABLED test_scriptable_fn_as_attr (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"module: flaky-tests",
"skipped"
] | 2 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_scriptable_fn_as_attr&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36368721106).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_scriptable_fn_as_attr`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 3423, in test_scriptable_fn_as_attr
self.checkModule(m, (inp, ))
~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/jit_utils.py", line 632, in checkModule
sm = torch.jit.script(nn_module)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
obj=obj,
...<3 lines>...
example_inputs=example_inputs,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 1150, in _script_impl
return torch.jit._recursive.create_script_module(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
obj, torch.jit._recursive.infer_methods_to_compile
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_recursive.py", line 555, in create_script_module
AttributeTypeIsSupportedChecker().check(nn_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_check.py", line 62, in check
source_lines = inspect.getsource(nn_module.__class__.__init__)
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1256, in getsource
lines, lnum = getsourcelines(object)
~~~~~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_scriptable_fn_as_attr
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu | true |
2,819,388,487 | DISABLED test_pybind_type_comparisons (__main__.TestScript) | pytorch-bot[bot] | closed | [
"oncall: jit",
"module: flaky-tests",
"skipped"
] | 3 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pybind_type_comparisons&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36368972765).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pybind_type_comparisons`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 3814, in test_pybind_type_comparisons
def f():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/_script.py", line 1439, in script
ret = _script_impl(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/_script.py", line 1209, in _script_impl
ast = get_jit_def(obj, obj.__name__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 1006, in getsourcelines
lines, lnum = findsource(object)
File "/opt/conda/envs/py_3.9/lib/python3.9/inspect.py", line 831, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 620, in <genexpr>
result = dict(
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.9/lib/python3.9/linecache.py", line 43, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_pybind_type_comparisons
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 @wdvr @chauhang @penguinwu | true |
2,819,302,997 | DataDependentOutputException with aten.equal.default and Dynamo export | pluflou | open | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: compile ux"
] | 3 | NONE | ### 🐛 Describe the bug
**Description of the problem:**
I am trying to export models whose source code I can't change, and their forward methods call `torch.equal`. Export fails with `DataDependentOutputException: aten.equal.default`, and the error message says `Unsupported: data dependent operator: aten.equal.default; to enable, set torch._dynamo.config.capture_scalar_outputs = True`, but setting that flag has no effect.
If this is not yet supported, is there a workaround for models that call `torch.equal` that doesn't require rewriting them?
**Simple way to reproduce:**
```python
import torch
import torch._dynamo

class MyModule(torch.nn.Module):
def __init__(self, a):
super().__init__()
self.a = a
def forward(self, x):
if torch.equal(self.a, x):
return x * x
else:
return x
module = MyModule(torch.tensor([1]))
input = torch.tensor([1])
test = module(input)
graph, _ = torch._dynamo.export(module)(input)
```
**Error message:**
```
---------------------------------------------------------------------------
DataDependentOutputException Traceback (most recent call last)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py:2132](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py#line=2131), in run_node(tracer, node, args, kwargs, nnmodule)
2131 if op == "call_function":
-> 2132 return node.target(*args, **kwargs)
2133 elif op == "call_method":
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/utils/_stats.py:21](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/utils/_stats.py#line=20), in count.<locals>.wrapper(*args, **kwargs)
20 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1
---> 21 return fn(*args, **kwargs)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py:1238](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py#line=1237), in FakeTensorMode.__torch_dispatch__(self, func, types, args, kwargs)
1237 try:
-> 1238 return self.dispatch(func, types, args, kwargs)
1239 except TypeError:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py:1692](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py#line=1691), in FakeTensorMode.dispatch(self, func, types, args, kwargs)
1691 if self.cache_enabled:
-> 1692 return self._cached_dispatch_impl(func, types, args, kwargs)
1693 else:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py:1348](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py#line=1347), in FakeTensorMode._cached_dispatch_impl(self, func, types, args, kwargs)
1347 if output is _UNASSIGNED:
-> 1348 output = self._dispatch_impl(func, types, args, kwargs)
1350 return output
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py:1983](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py#line=1982), in FakeTensorMode._dispatch_impl(self, func, types, args, kwargs)
1982 if run_impl_check(func):
-> 1983 op_impl_out = op_impl(self, func, *args, **kwargs)
1984 if op_impl_out is not NotImplemented:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_impls.py:485](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_impls.py#line=484), in data_dep(fake_mode, func, *args, **kwargs)
483 @register_op_impl(lambda func: torch.Tag.data_dependent_output in func.tags)
484 def data_dep(fake_mode, func, *args, **kwargs):
--> 485 raise DataDependentOutputException(func)
DataDependentOutputException: aten.equal.default
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py:2017](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py#line=2016), in get_fake_value(node, tx, allow_non_graph_fake)
2016 with tx.fake_mode, enable_python_dispatcher():
-> 2017 ret_val = wrap_fake_exception(
2018 lambda: run_node(tx.output, node, args, kwargs, nnmodule)
2019 )
2020 except Unsupported:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py:1574](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py#line=1573), in wrap_fake_exception(fn)
1573 try:
-> 1574 return fn()
1575 except UnsupportedFakeTensorException as e:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py:2018](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py#line=2017), in get_fake_value.<locals>.<lambda>()
2016 with tx.fake_mode, enable_python_dispatcher():
2017 ret_val = wrap_fake_exception(
-> 2018 lambda: run_node(tx.output, node, args, kwargs, nnmodule)
2019 )
2020 except Unsupported:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py:2150](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py#line=2149), in run_node(tracer, node, args, kwargs, nnmodule)
2149 except Exception as e:
-> 2150 raise RuntimeError(make_error_message(e)).with_traceback(
2151 e.__traceback__
2152 ) from e
2154 raise AssertionError(op)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py:2132](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py#line=2131), in run_node(tracer, node, args, kwargs, nnmodule)
2131 if op == "call_function":
-> 2132 return node.target(*args, **kwargs)
2133 elif op == "call_method":
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/utils/_stats.py:21](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/utils/_stats.py#line=20), in count.<locals>.wrapper(*args, **kwargs)
20 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1
---> 21 return fn(*args, **kwargs)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py:1238](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py#line=1237), in FakeTensorMode.__torch_dispatch__(self, func, types, args, kwargs)
1237 try:
-> 1238 return self.dispatch(func, types, args, kwargs)
1239 except TypeError:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py:1692](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py#line=1691), in FakeTensorMode.dispatch(self, func, types, args, kwargs)
1691 if self.cache_enabled:
-> 1692 return self._cached_dispatch_impl(func, types, args, kwargs)
1693 else:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py:1348](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py#line=1347), in FakeTensorMode._cached_dispatch_impl(self, func, types, args, kwargs)
1347 if output is _UNASSIGNED:
-> 1348 output = self._dispatch_impl(func, types, args, kwargs)
1350 return output
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py:1983](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_tensor.py#line=1982), in FakeTensorMode._dispatch_impl(self, func, types, args, kwargs)
1982 if run_impl_check(func):
-> 1983 op_impl_out = op_impl(self, func, *args, **kwargs)
1984 if op_impl_out is not NotImplemented:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_impls.py:485](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_subclasses/fake_impls.py#line=484), in data_dep(fake_mode, func, *args, **kwargs)
483 @register_op_impl(lambda func: torch.Tag.data_dependent_output in func.tags)
484 def data_dep(fake_mode, func, *args, **kwargs):
--> 485 raise DataDependentOutputException(func)
RuntimeError: Failed running call_function <built-in method equal of type object at 0x10a4b2240>(*(FakeTensor(..., size=(1,), dtype=torch.int64), FakeTensor(..., size=(1,), dtype=torch.int64)), **{}):
aten.equal.default
During handling of the above exception, another exception occurred:
Unsupported Traceback (most recent call last)
Cell In[75], line 15
13 input = torch.tensor([1])
14 test = module(input)
---> 15 graph, _ = torch._dynamo.export(module)(input)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py:1432](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py#line=1431), in export.<locals>.inner(*args, **kwargs)
1430 # TODO(voz): We may have instances of `f` that mutate inputs, we should track sideeffects and reject.
1431 try:
-> 1432 result_traced = opt_f(*args, **kwargs)
1433 except ConstraintViolationError as e:
1434 constraint_violation_error = e
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/nn/modules/module.py:1736](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/nn/modules/module.py#line=1735), in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/nn/modules/module.py:1747](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/nn/modules/module.py#line=1746), in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py:465](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py#line=464), in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
460 saved_dynamic_layer_stack_depth = (
461 torch._C._functorch.get_dynamic_layer_stack_depth()
462 )
464 try:
--> 465 return fn(*args, **kwargs)
466 finally:
467 # Restore the dynamic layer stack depth if necessary.
468 torch._C._functorch.pop_dynamic_layer_stack_and_undo_to_depth(
469 saved_dynamic_layer_stack_depth
470 )
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/nn/modules/module.py:1736](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/nn/modules/module.py#line=1735), in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/nn/modules/module.py:1747](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/nn/modules/module.py#line=1746), in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:1269](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py#line=1268), in CatchErrorsWrapper.__call__(self, frame, cache_entry, frame_state)
1263 return hijacked_callback(
1264 frame, cache_entry, self.hooks, frame_state
1265 )
1267 with compile_lock, _disable_current_modes():
1268 # skip=1: skip this frame
-> 1269 return self._torchdynamo_orig_callable(
1270 frame, cache_entry, self.hooks, frame_state, skip=1
1271 )
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:526](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py#line=525), in ConvertFrameAssert.__call__(self, frame, cache_entry, hooks, frame_state, skip)
510 compile_id = CompileId(frame_id, frame_compile_id)
512 signpost_event(
513 "dynamo",
514 "_convert_frame_assert._compile",
(...)
523 },
524 )
--> 526 return _compile(
527 frame.f_code,
528 frame.f_globals,
529 frame.f_locals,
530 frame.f_builtins,
531 self._torchdynamo_orig_callable,
532 self._one_graph,
533 self._export,
534 self._export_constraints,
535 hooks,
536 cache_entry,
537 cache_size,
538 frame,
539 frame_state=frame_state,
540 compile_id=compile_id,
541 skip=skip + 1,
542 )
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:924](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py#line=923), in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_entry, cache_size, frame, frame_state, compile_id, skip)
922 guarded_code = None
923 try:
--> 924 guarded_code = compile_inner(code, one_graph, hooks, transform)
925 return guarded_code
926 except Exception as e:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:666](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py#line=665), in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
664 with dynamo_timed("_compile.compile_inner", phase_name="entire_frame_compile"):
665 with CompileTimeInstructionCounter.record():
--> 666 return _compile_inner(code, one_graph, hooks, transform)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_utils_internal.py:87](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_utils_internal.py#line=86), in compile_time_strobelight_meta.<locals>.compile_time_strobelight_meta_inner.<locals>.wrapper_function(*args, **kwargs)
84 kwargs["skip"] = kwargs["skip"] + 1
86 if not StrobelightCompileTimeProfiler.enabled:
---> 87 return function(*args, **kwargs)
89 return StrobelightCompileTimeProfiler.profile_compile_time(
90 function, phase_name, *args, **kwargs
91 )
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:699](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py#line=698), in _compile.<locals>._compile_inner(code, one_graph, hooks, transform)
697 CompileContext.get().attempt = attempt
698 try:
--> 699 out_code = transform_code_object(code, transform)
700 break
701 except exc.RestartAnalysis as e:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py:1322](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py#line=1321), in transform_code_object(code, transformations, safe)
1319 instructions = cleaned_instructions(code, safe)
1320 propagate_line_nums(instructions)
-> 1322 transformations(instructions, code_options)
1323 return clean_and_assemble_instructions(instructions, keys, code_options)[1]
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:219](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py#line=218), in preserve_global_state.<locals>._fn(*args, **kwargs)
215 exit_stack.enter_context(
216 torch.fx._symbolic_trace._maybe_revert_all_patches()
217 )
218 try:
--> 219 return fn(*args, **kwargs)
220 finally:
221 cleanup.close()
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py:634](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py#line=633), in _compile.<locals>.transform(instructions, code_options)
632 try:
633 with tracing(tracer.output.tracing_context), tracer.set_current_tx():
--> 634 tracer.run()
635 except exc.UnspecializeRestartAnalysis:
636 speculation_log.clear()
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:2796](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py#line=2795), in InstructionTranslator.run(self)
2795 def run(self):
-> 2796 super().run()
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:983](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py#line=982), in InstructionTranslatorBase.run(self)
981 try:
982 self.output.push_tx(self)
--> 983 while self.step():
984 pass
985 except BackendCompilerFailed:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:895](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py#line=894), in InstructionTranslatorBase.step(self)
892 self.update_block_stack(inst)
894 try:
--> 895 self.dispatch_table[inst.opcode](self, inst)
896 return not self.output.should_exit
897 except exc.ObservedException as e:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:582](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py#line=581), in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
580 return handle_graph_break(self, inst, speculation.reason)
581 try:
--> 582 return inner_fn(self, inst)
583 except Unsupported as excp:
584 if self.generic_context_manager_depth > 0:
585 # We don't support graph break under GenericContextWrappingVariable,
586 # If there is, we roll back to the checkpoint and fall back.
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:2279](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py#line=2278), in InstructionTranslatorBase.CALL(self, inst)
2277 @break_graph_if_unsupported(push=1)
2278 def CALL(self, inst):
-> 2279 self._call(inst)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:2273](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py#line=2272), in InstructionTranslatorBase._call(self, inst, call_kw)
2268 kwargs = {}
2270 try:
2271 # if call_function fails, need to set kw_names to None, otherwise
2272 # a subsequent call may have self.kw_names set to an old value
-> 2273 self.call_function(fn, args, kwargs)
2274 finally:
2275 self.kw_names = None
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py:830](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py#line=829), in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
828 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
829 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 830 self.push(fn.call_function(self, args, kwargs))
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/variables/torch.py:897](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/variables/torch.py#line=896), in TorchInGraphFunctionVariable.call_function(self, tx, args, kwargs)
888 if "out" in kwargs and isinstance(kwargs["out"], variables.TensorVariable):
889 # Calling fake tensor propagation can mutate the out= tensor in
890 # tx.output.tracked_fakes. tracked_fakes are used to apply
(...)
893 # guards. So save the shape now, and check later if it has
894 # changed. If it has, graph break.
895 fake_out_shape = kwargs["out"].proxy.node.meta["example_value"].shape
--> 897 tensor_variable = wrap_fx_proxy(
898 tx=tx,
899 proxy=tx.output.create_proxy(
900 "call_function",
901 fn_,
902 *proxy_args_kwargs(args, kwargs),
903 ),
904 )
906 if (
907 isinstance(tensor_variable, TensorVariable)
908 and "requires_grad" in kwargs
909 and kwargs["requires_grad"].as_python_constant()
910 ):
911 unimplemented(
912 """factory functions that return tensors that require grad are not supported.
913 Either create the tensor outside the compiled region, or do not set the tensor to require_grad"""
914 )
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py:2037](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py#line=2036), in wrap_fx_proxy(tx, proxy, example_value, subclass_type, **options)
2029 kwargs = {
2030 "tx": tx,
2031 "proxy": proxy,
(...)
2034 **options,
2035 }
2036 if subclass_type is None:
-> 2037 return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
2038 else:
2039 result = wrap_fx_proxy_cls(target_cls=TensorWithTFOverrideVariable, **kwargs)
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py:2124](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/variables/builder.py#line=2123), in wrap_fx_proxy_cls(target_cls, tx, proxy, example_value, subclass_type, **options)
2119 with torch._dynamo.utils._disable_saved_tensors_hooks_during_tracing():
2120 # with preserve_rng_state():
2121 if example_value is None:
2122 # only allow_non_graph_fake in this instance because we handle the non-fake
2123 # cases properly below.
-> 2124 example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
2126 # Handle recursive calls here
2127 elif maybe_get_fake_mode(example_value) is tx.fake_mode:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py:2030](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/utils.py#line=2029), in get_fake_value(node, tx, allow_non_graph_fake)
2025 cause = e.__cause__
2027 if isinstance(
2028 cause, torch._subclasses.fake_tensor.DataDependentOutputException
2029 ):
-> 2030 unimplemented(
2031 f"data dependent operator: {cause.func}; "
2032 "to enable, set torch._dynamo.config.capture_scalar_outputs = True"
2033 )
2034 elif isinstance(
2035 cause, torch._subclasses.fake_tensor.DynamicOutputShapeException
2036 ):
2037 if not torch._dynamo.config.capture_dynamic_output_shape_ops:
File [/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/exc.py:297](http://localhost:8888/opt/miniconda3/envs/test-pytorch-25/lib/python3.12/site-packages/torch/_dynamo/exc.py#line=296), in unimplemented(msg, from_exc, case_name)
295 if from_exc is not _NOTHING:
296 raise Unsupported(msg, case_name=case_name) from from_exc
--> 297 raise Unsupported(msg, case_name=case_name)
Unsupported: data dependent operator: aten.equal.default; to enable, set torch._dynamo.config.capture_scalar_outputs = True
from user code:
File "[/var/folders/zz/zvb8y0jx2sb5q3hn972ys2d5zzzdd3/T/ipykernel_51728/1596957745.py", line 7](http://localhost:8888/var/folders/zz/zvb8y0jx2sb5q3hn972ys2d5zzzdd3/T/ipykernel_51728/1596957745.py#line=6), in forward
if torch.equal(self.a, x):
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.4.1 (x86_64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 21:00:12) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-14.4.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Versions of relevant libraries:
[pip3] botorch==0.12.0
[pip3] gpytorch==1.13
[pip3] numpy==2.0.0
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20250108
[pip3] torch==2.5.1
[conda] botorch 0.12.0 pyhd8ed1ab_1 conda-forge
[conda] gpytorch 1.13 pyh101cb37_1 conda-forge
[conda] libtorch 2.5.1 cpu_openblas_hf9ef3f7_1
[conda] nomkl 3.0 0
[conda] numpy 2.0.0 py312h255ab90_1
[conda] numpy-base 2.0.0 py312h12d8432_1
[conda] pytorch 2.5.1 cpu_openblas_py312hf01ac55_1
cc @chauhang @penguinwu @ezyang @bobrenjc93 | true |
2,819,299,382 | Test of triton.compile in worker processes | jamesjwu | closed | [
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145969
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,819,295,269 | jjwu triton compiler test | jamesjwu | closed | [
"module: inductor",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145968
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,819,283,480 | lintrunner flakily times out while running dmypy | malfet | closed | [
"module: typing",
"module: ci",
"module: lint",
"triaged",
"module: flaky-tests"
] | 5 | CONTRIBUTOR | ### 🐛 Describe the bug
Error normally looks like:
```
>>> General linter failure:
Error (MYPY) command-failed
Daemon is stuck; consider /opt/conda/envs/py_3.9/bin/dmypy kill
```
though there's been at least one variation with the message:
```
>>> General linter failure:
Error (MYPYSTRICT) command-failed
Response: {'restart': 'configuration changed', 'platform': 'linux',
'python_version': '3_11', 'roundtrip_time': 0.4517703056335449}
```
Re-run usually helps
See https://github.com/pytorch/pytorch/actions/runs/13037124925/job/36370212700 for example
### Versions
CI
cc @ezyang @xuzhao9 @gramster @seemethere @pytorch/pytorch-dev-infra @clee2000 @wdvr | true |
2,819,276,667 | [BE] Upgrade to mypy 1.14 | ZainRizvi | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: releng",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Upgrade mypy version
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,819,267,421 | dist.all_gather_object causes OOM when gathering string lists, attempting to allocate >1EB memory, without an informative error message | qsh-zh | closed | [
"oncall: distributed",
"triaged"
] | 6 | NONE | ### 🐛 Describe the bug
I've encountered an Out of Memory (OOM) error when using dist.all_gather_object to gather lists of strings across multiple tasks.
For example,
I encountered an intermittent error while using DCP to load checkpoints. While dcp.load() typically works well, it occasionally fails with the following error:
```shell
[rank112]: dcp.load(
[rank112]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/checkpoint/logger.py", line 83, in wrapper
[rank112]: result = func(*args, **kwargs)
[rank112]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/checkpoint/utils.py", line 429, in inner_func
[rank112]: return func(*args, **kwargs)
[rank112]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/checkpoint/state_dict_loader.py", line 154, in load
[rank112]: keys = _all_gather_keys(state_dict, process_group)
[rank112]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/checkpoint/utils.py", line 49, in _all_gather_keys
[rank112]: dist.all_gather_object(gathered_keys, keys, group=group)
[rank112]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 83, in wrapper
[rank112]: return func(*args, **kwargs)
[rank112]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 2728, in all_gather_object
[rank112]: input_tensor.resize_(max_object_size)
[rank112]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory.
```
It should be noted that
```shell
[rank112]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/checkpoint/utils.py", line 49, in _all_gather_keys
[rank112]: dist.all_gather_object(gathered_keys, keys, group=group)
```
only tries to all-gather a list of strings. It is hard to believe that this call needs to allocate more than 1EB of memory.
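For context, `all_gather_object` works by pickling each object into a byte tensor, all-gathering the byte counts, and resizing the input tensor to the maximum reported size — which is exactly the `input_tensor.resize_(max_object_size)` line in the traceback, so one rank reporting a garbage size inflates every rank's allocation. A standalone pure-Python sketch of that flow (illustrative only, not the actual c10d code):

```python
import pickle

def gather_objects(objects_per_rank):
    # Step 1: each "rank" serializes its object into bytes.
    payloads = [pickle.dumps(obj) for obj in objects_per_rank]
    # Step 2: ranks all-gather the payload sizes (one int per rank).
    sizes = [len(p) for p in payloads]
    # Step 3: every rank resizes its buffer to the max reported size.
    # This mirrors input_tensor.resize_(max_object_size) -- if any rank
    # reports a garbage size here, the allocation explodes.
    max_size = max(sizes)
    buffers = [p.ljust(max_size, b"\0") for p in payloads]
    # Step 4: truncate each buffer back to its real size and unpickle.
    return [pickle.loads(buf[:n]) for buf, n in zip(buffers, sizes)]

print(gather_objects([["ckpt.key.a"], ["ckpt.key.a", "ckpt.key.b"]]))
```

A sanity bound on `max_object_size` before the resize would turn this failure mode into an informative error instead of a >1EB OOM.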
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1055-aws-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5300.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] DISTS-pytorch==0.1
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cudnn-frontend==1.7.0
[pip3] nvidia-nccl-cu12==2.22.3
[pip3] nvidia-pytriton==0.5.13
[pip3] nvtx==0.2.10
[pip3] onnx==1.16.2
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxruntime==1.20.0
[pip3] optree==0.13.0
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==3.0.0+dedb7bdf3
[pip3] slangtorch==1.3.1
[pip3] torch==2.5.0a0+e000cf0ad9.nv24.10
[pip3] torch_automated_profiler==1.10.0
[pip3] torch-fidelity==0.3.0
[pip3] torch-optimizer==0.3.0
[pip3] torch_tensorrt==2.5.0a0
[pip3] torchmetrics==1.6.0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.20.0a0
[pip3] tritonclient==2.51.0
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,819,229,371 | [PGNCCL] Simplify support macro definition | kwen2501 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145964
* #145893
- Promotes usage of `NCCL_VERSION_CODE >= NCCL_VERSION(X, Y, Z)`
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,819,190,247 | IRNode created from nonzero output does not contain unbacked symint | shunting314 | open | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
Issue exposed by: https://github.com/pytorch/pytorch/pull/145904
To repro:
```
TORCHINDUCTOR_SIZE_ASSERTS=1 python test/inductor/test_torchinductor.py -k test_nonzero_unbacked_refinement_cuda
```
Inductor generates a fallback handler for nonzero. When running the op with FakeTensor, the size contains an unbacked symint as expected, but when we create an IRNode from it, the unbacked symint gets lost. It's due to this line:
https://github.com/pytorch/pytorch/blob/6aed6c042e5a93c442874d4159687d3429bc7a24/torch/_inductor/ir.py#L5007
### Error logs
_No response_
### Versions
.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @aakhundov | true |
2,819,165,643 | Update fuzzer guidance to include rng | mlazos | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Add another condition to fuzzer issue guidance. | true |
2,819,165,547 | Don't use mypy daemon in CI | ZainRizvi | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | This is an attempt to fix flaky mypy errors in CI that look like:
```
dmypy status --verbose
connection_name : /var/folders/rf/qrn1jkgj0b9_tcznwp8ck46w0000gn/T/tmpjoqsid7_/dmypy.sock
pid : 32233
error : timed out
Daemon is stuck; consider /Users/zainr/pytorch/venv/bin/dmypy kill
```
"Fix" it by not using the daemon at all, since it doesn't actually provide any perf benefits in CI.
Fixes https://github.com/pytorch/pytorch/issues/145967 (hopefully) | true |
2,819,163,990 | Update to remind users to use torch.compile template | mlazos | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Users have been submitting fuzzer issues without meeting the requirements outline in the torch.compile issue template. This updates the note to remind users to use the torch.compile template for torch.compile bugs.
| true |
2,819,161,328 | [export] Sync model container types to schema.py | zhxchen17 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 7 | CONTRIBUTOR | Summary: Synced from D68840230
Test Plan: No behavior changes to existing API. Will be tested internally.
Differential Revision: D68846532
| true |
2,819,135,638 | [dynamo][builtin-skipfiles-cleanup] Remove inspect | anijain2305 | closed | [
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145804
* #145876
* __->__ #145958
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,819,110,540 | Fix invalid nested int guarding in broadcast_shapes() | jbschlosser | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145957
Fixes #145874
This PR takes the approach of updating the logic determining whether multiple shapes broadcast together to handle nested ints specially.
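For reference, the concrete-int version of the broadcast check looks like the sketch below (standalone, plain ints only; the PR's actual change is about how the symbolic version treats nested-int dims when deciding the `== 1` cases, which this cannot represent):

```python
def broadcast_shapes(*shapes):
    ndim = max(len(s) for s in shapes)
    # Left-pad shorter shapes with 1s so all shapes have the same rank.
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        # Dims of size 1 broadcast; all other sizes must agree.
        non_one = {d for d in dims if d != 1}
        if len(non_one) > 1:
            raise ValueError(f"shapes are not broadcastable: {dims}")
        out.append(non_one.pop() if non_one else 1)
    return tuple(out)

print(broadcast_shapes((3, 1), (1, 4)))  # (3, 4)
```
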
Possible alternative approach: don't update `broadcast_shapes()` + indicate that e.g. `Ne(j0, 1)` should statically evaluate to False. I briefly tried this but it wasn't straightforward. Is it better? | true |
2,819,104,221 | [export] Additionally save pytree namedtuple field names | angelayi | closed | [
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 6 | CONTRIBUTOR | If a user passes in a namedtuple as an input, currently the input TreeSpec looks like: `TreeSpec(type=namedtuple, context=”class_fqn”, children_spec=[*, *])`
The user then saves the program containing this input TreeSpec. But what happens if they load it in a new environment where `class_fqn` now contains an additional field?
This means that the exported program is now expected to take in another input. But since those fields were not used in the original program, users should be able to just drop those additional fields and the program will run successfully. This is needed/used in APS, where they use the unflattener's adapter to adapt the inputs based on the previously saved treespecs.
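A minimal sketch of the dropping behavior, assuming the saved field names are available (the `adapt` helper is invented for illustration; the real adapter operates on TreeSpecs):

```python
from collections import namedtuple

def adapt(value, saved_fields):
    """Keep only the namedtuple fields that existed when the program was saved."""
    Saved = namedtuple(type(value).__name__, saved_fields)
    return Saved(*(getattr(value, f) for f in saved_fields))

# Exported against Point(x, y); loaded where Point has grown a `z` field.
Point3 = namedtuple("Point", ["x", "y", "z"])
print(adapt(Point3(1, 2, 3), ["x", "y"]))  # Point(x=1, y=2)
```
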
There are a couple of [solutions](https://docs.google.com/document/d/1V4ZSdy-8PUISWc8RqvGu3DU01BVegJhHHPWqa1Io7Eg/edit?tab=t.0) for how we can address this, but eventually we settled on saving a side table mapping namedtuple types to their list of field names, which can then be accessed by the adapter. | true |
2,819,078,775 | Add MPS OpInfo db, rework test_mps to use OpInfo | skotapati | open | [
"triaged",
"open source",
"release notes: mps",
"ciflow/mps",
"keep-going"
] | 8 | COLLABORATOR | Infrastructure changes that will help enable: https://github.com/pytorch/pytorch/pull/142202 | true |
2,819,069,121 | add inductor_triton_kernel_mapping_post_grad.json to tlparse | yushangdi | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Landing D67612181 here. The original exported PR somehow fails OSS CI, but this one doesn't (though the PR content is the same).
Add debug trace artifact to inductor_triton_kernel_mapping_post_grad.json (debug artifact for provenance tracking) to tlparse.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,819,030,748 | Fix signif_strides_equal for symints, dedupe | eellison | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145448
* __->__ #145953
Previous impl would take a size hint, which was failing internally with a
```
strides1 = [V.graph.sizevars.size_hint(strides1[i]) for i in non_1_indices]
File "/dev/shm/uid-30083/6f57b5f9-seed-nspid4026541609_cgpid284393-ns-4026541967/torch/_inductor/sizevars.py", line 554, in size_hint
return int(out)
File "/dev/shm/uid-30083/6f57b5f9-seed-nspid4026541609_cgpid284393-ns-4026541967/sympy/core/expr.py", line 307, in __int__
raise TypeError("Cannot convert symbols to int")
```
There are unbacked tests in test_triton which should exercise this, as well as other tests for these functions when they were added.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,819,013,844 | [cutlass backend] matmul of FP32 would result in error: a value of type "float *" cannot be used to initialize an entity of type "const cutlass::tfloat32_t *" | henrylhtsang | open | [
"module: build",
"module: cuda",
"triaged"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
cross post: https://fb.workplace.com/groups/1037051091797916/posts/1038569208312771/
Just want to document this in case people run into this error.
It seems to be a problem with float vs. tensor float 32 (tf32). In https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/cuda/cuda_template.py#L231, float is mapped to float instead of cutlass::tfloat32_t. However, fixing that alone wouldn't solve the problem: among A, B, C (nullptr), and D, the template expects A and B to be cutlass::tfloat32_t and D to be float, per https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/cuda/gemm_template.py#L105-L111
I tested disabling tf32 etc., which helped a bit, but compilation would still fail.
I think this is a lower-priority issue. Do let me know if this is important to you. Bonus points if you know how to fix it.
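One possible shape of a fix is a mapping that distinguishes operand dtypes from the output dtype. A hypothetical sketch (the tables and helper below are invented for illustration and are not Inductor's actual code):

```python
# Hypothetical dtype-mapping tables, invented for illustration: FP32 GEMM
# operands (A, B) use tf32 on tensor cores, while the output D stays float.
OPERAND_DTYPE = {
    "torch.float32": "cutlass::tfloat32_t",
    "torch.float16": "cutlass::half_t",
}
OUTPUT_DTYPE = {
    "torch.float32": "float",
    "torch.float16": "cutlass::half_t",
}

def cutlass_arg_cast(tensor_name, torch_dtype, role):
    # A/B are input operands; C/D go through the output/epilogue table.
    table = OPERAND_DTYPE if role in ("A", "B") else OUTPUT_DTYPE
    return f"({table[torch_dtype]}*)({tensor_name})"

print(cutlass_arg_cast("X", "torch.float32", "A"))  # (cutlass::tfloat32_t*)(X)
print(cutlass_arg_cast("Y", "torch.float32", "D"))  # (float*)(Y)
```
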
error:
```
error: a value of type "float *" cannot be used to initialize an entity of type "const cutlass::tfloat32_t *"
torch/_inductor/select_algorithm.py:1783] [0/0] (float*)(X),
```
repro:
```
import logging
import os
os.environ["TORCH_LOGS"] = "+output_code,+benchmarking"
import torch
import torch._inductor.config
_CUTLASS_DIR = os.path.join(os.path.dirname(__file__), "../../third_party/cutlass/")
torch._inductor.config.max_autotune = True
torch._inductor.config.autotune_in_subproc = False
torch._inductor.config.max_autotune_gemm_backends = "CUTLASS"
torch._inductor.config.autotune_fallback_to_aten = False
torch._inductor.config.cuda.cutlass_max_profiling_configs = 2
# torch._inductor.config.cuda.cutlass_dir = _CUTLASS_DIR
class TestModel(torch.nn.Module):
def forward(self, A, B):
return A @ B
def main():
M = 1024
inputs = [torch.randn(M, M, device="cuda", dtype=torch.float32) for _ in range(2)]
model = TestModel().cuda()
compiled_model = torch.compile(model, fullgraph=True)
_ = compiled_model(*inputs)
print("done")
if __name__ == "__main__":
main()
```
### Versions
trunk
cc @malfet @seemethere @ptrblck @msaroufim @eqy | true |
2,818,967,070 | DISABLED test_profile_all_threads (__main__.TestProfiler) | pytorch-bot[bot] | closed | [
"module: flaky-tests",
"skipped",
"oncall: profiler"
] | 7 | NONE | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_profile_all_threads&suite=TestProfiler&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36354983797).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 9 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_profile_all_threads`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/profiler/test_profiler.py", line 2236, in test_profile_all_threads
verify_events(returned_events[0])
IndexError: list index out of range
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/profiler/test_profiler.py TestProfiler.test_profile_all_threads
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `profiler/test_profiler.py`
cc @clee2000 @wdvr @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | true |
2,818,946,716 | [linter] Grep linter batches long command | clee2000 | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | If the command is too long, the linter fails with
```
Failed due to OSError:
[Errno 7] Argument list too long: 'grep'
```
Fix this by batching the command so it is shorter
The limit of 750k was chosen because `getconf ARG_MAX` returns ~1M on my Mac. My guess is that most people shouldn't hit this unless they run --all-files and the directory paths are long.
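The batching can be sketched as follows (a simplified standalone version, not the linter's actual implementation):

```python
def batch_args(files, limit=750_000):
    """Split `files` into batches whose joined argv length stays under `limit`."""
    batches, current, current_len = [], [], 0
    for f in files:
        # +1 accounts for the space separating arguments on the command line.
        if current and current_len + len(f) + 1 > limit:
            batches.append(current)
            current, current_len = [], 0
        current.append(f)
        current_len += len(f) + 1
    if current:
        batches.append(current)
    return batches

# Each batch becomes a separate `grep` invocation instead of one huge one.
for batch in batch_args(["a.py", "b.py", "some/long/path.py"], limit=20):
    print(batch)
```
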
cc @wdvr | true |
2,818,908,381 | [CUDA][Blackwell] Blackwell Tracking Issue | eqy | open | [
"module: build",
"module: cuda",
"triaged"
] | 10 | COLLABORATOR | ### 🚀 The feature, motivation and pitch
Blackwell's CUDA toolkit has been released and we're working to rapidly upstream the fixes/upgrades that are required to support Blackwell (e.g., SM 10.0, SM 12.0).
Build fixes (these are needed to prevent kernels from crashing or enable existing backend support):
-------------------------------------------------------------------------------------------------------------
- [x] enable compute capabilities in build https://github.com/pytorch/pytorch/pull/145436 https://github.com/pytorch/pytorch/pull/141724
- [x] gate sm90 specific kernels to sm90 for now https://github.com/pytorch/pytorch/pull/145728
- [x] limit number of threads in avgpool_2d backward to prevent crash on launch https://github.com/pytorch/pytorch/pull/145669
- [x] SDPA kernel SM gating https://github.com/pytorch/pytorch/pull/145602
- [x] CUDA 12.8 upgrade incl. CI https://github.com/pytorch/pytorch/pull/145567
Library upgrades (these are needed to enable Blackwell support on math libraries):
-------------------------------------------------------------------------------------------
- [x] cuDNN upgrade to 9.7.0+
- [x] cuBLAS upgrade (will implicitly happen with upgrade to CUDA 12.8+)
- [x] NCCL upgrade to 2.25.1 https://github.com/pytorch/pytorch/pull/145776
- [x] CUTLASS upgrade to 3.8.0 https://github.com/pytorch/pytorch/pull/145741
- [x] Triton upgrade to main/old pin w/ Blackwell support https://github.com/pytorch/pytorch/issues/146518 CC @drisspg
Performance upgrades (existing kernels w/ improved implementation on Blackwell):
--------------------------------------------------------------------------------------------
- [x] 128-bit vectorization https://github.com/pytorch/pytorch/pull/145746
cc @malfet @seemethere @ptrblck @msaroufim | true |
2,818,908,127 | give emulate_precision_casts an envar | eellison | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145948
this was requested internally
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,818,904,506 | FLOPs counter error with PyTorch 2.5 and 2.6 | jiagaoxiang | open | [
"oncall: distributed",
"triaged",
"module: flop counter"
] | 12 | NONE | ### 🐛 Describe the bug
I am fine-tuning the llama3.2-11b-instruct model with the llama-cookbook repo on H100 (batch size 2). The total FLOPs count with PyTorch 2.5 and 2.6 is ~186 TFLOPs, but with PyTorch nightly (2.7) it is ~2900 TFLOPs. Could you please check whether the FLOPs counting here is correct? Both numbers feel wrong: if the total were really 2900 TFLOPs, the per-GPU throughput on H100 would exceed 700 TFLOPs/s, which is absurdly high.
Below are the reproduce steps:
```
git clone https://github.com/meta-llama/llama-cookbook.git
docker run --name llama-cookbook --shm-size=64g --gpus all -it --rm -v /home/dougljia:/home/dougljia -e HF_HOME=/home/dougljia/model nvcr.io/nvidia/pytorch:24.12-py3
cd /home/dougljia/llama-cookbook
pip install -U pip setuptools
pip install -e .
pip install huggingface_hub transformers fire
huggingface-cli login --token <replace with your token>
torchrun --nnodes 1 --nproc_per_node 8 getting-started/finetuning/finetuning.py --enable_fsdp --lr 1e-6 --num_epochs 1 --batch_size_training 2 \
--model_name meta-llama/Llama-3.2-11B-Vision-Instruct --dist_checkpoint_root_folder ./finetuned_model --dist_checkpoint_folder fine-tuned --use_fast_kernels --dataset "custom_dataset" \
--custom_dataset.test_split "test" --custom_dataset.file "getting-started/finetuning/datasets/ocrvqa_dataset.py" --run_validation False --save_model False --batching_strategy padding \
--flop_counter --flop_counter_start 10 --max_train_step 15 --fsdp_activation_checkpointing
```
The output will be:
```
# Module FLOP % Total
# ----------------------------------------------------- -------- ---------
# FullyShardedDataParallel 189.021T 100.00%
# - aten.convolution 0.019T 0.01%
# - aten.bmm 0.000T 0.00%
# - aten.mm 83.731T 44.30%
# - aten._scaled_dot_product_cudnn_attention 34.430T 18.21%
# - aten.addmm 27.784T 14.70%
# - aten._scaled_dot_product_cudnn_attention_backward 43.037T 22.77%
# - aten.convolution_backward 0.019T 0.01%
# FullyShardedDataParallel._fsdp_wrapped_module 189.021T 100.00%
# - aten.convolution 0.019T 0.01%
# - aten.bmm 0.000T 0.00%
# - aten.mm 83.731T 44.30%
# - aten._scaled_dot_product_cudnn_attention 34.430T 18.21%
# - aten.addmm 27.784T 14.70%
# - aten._scaled_dot_product_cudnn_attention_backward 43.037T 22.77%
# - aten.convolution_backward 0.019T 0.01%
# Training Epoch: 1/1, step 14/112 completed (loss: 0.4789400100708008): 13%|███▎ | 15/112 [01:41<10:56, 6.77s/it]
# Training Epoch: 1/1, step 14/112 completed (loss: 0.3038587272167206): 13%|███▎ | 15/112 [01:40<10:52, 6.73s/it]
# Training Epoch: 1/1, step 14/112 completed (loss: 0.7101249694824219): 13%|███▎ | 15/112 [01:40<10:48, 6.69s/it]
# Max CUDA memory allocated was 69 GB
# Max CUDA memory reserved was 77 GB
# Peak active CUDA memory was 69 GB
# CUDA Malloc retries : 0
# CPU Total Peak Memory consumed during the train (max): 3 GB
# Epoch 1: train_perplexity=1.0954, train_epoch_loss=0.0911, epoch time 103.91605499701109s
# training params are saved in /home/dougljia/llama-cookbook/finetuned_model/fine-tuned-meta-llama/Llama-3.2-11B-Vision-Instruct/train_params.yaml
# Key: avg_train_prep, Value: 1.095421552658081
# Key: avg_train_loss, Value: 0.0911393016576767
# Key: avg_epoch_time, Value: 103.91605499701109
# Key: avg_checkpoint_time, Value: 4.4002081267535686e-07
# Key: model_tflops, Value: 31.987115589590758
```
If you install the nightly pytorch in this docker image:
```
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126
cd /home/dougljia/llama-cookbook
pip install -U pip setuptools
pip install -e .
pip install huggingface_hub transformers fire
huggingface-cli login --token <replace with your token>
torchrun --nnodes 1 --nproc_per_node 8 getting-started/finetuning/finetuning.py --enable_fsdp --lr 1e-6 --num_epochs 1 --batch_size_training 2 \
--model_name meta-llama/Llama-3.2-11B-Vision-Instruct --dist_checkpoint_root_folder ./finetuned_model --dist_checkpoint_folder fine-tuned --use_fast_kernels --dataset "custom_dataset" \
--custom_dataset.test_split "test" --custom_dataset.file "getting-started/finetuning/datasets/ocrvqa_dataset.py" --run_validation False --save_model False --batching_strategy padding \
--flop_counter --flop_counter_start 10 --max_train_step 15 --fsdp_activation_checkpointing
```
The output will be:
```
# Training Epoch: 1/1, step 14/112 completed (loss: 0.39315265417099): 13%|███▌ | 15/112 [01:02<06:44, 4.17s/it]
# Module FLOP % Total
# --------------------------------------------------------- --------- ---------
# FullyShardedDataParallel 2855.461T 100.00%
# - aten.convolution 0.289T 0.01%
# - aten.bmm 0.002T 0.00%
# - aten.mm 1274.943T 44.65%
# - aten._scaled_dot_product_efficient_attention 516.971T 18.10%
# - aten.addmm 416.754T 14.59%
# - aten._scaled_dot_product_efficient_attention_backward 646.214T 22.63%
# - aten.convolution_backward 0.289T 0.01%
# FullyShardedDataParallel._fsdp_wrapped_module 2855.461T 100.00%
# - aten.convolution 0.289T 0.01%
# - aten.bmm 0.002T 0.00%
# - aten.mm 1274.943T 44.65%
# - aten._scaled_dot_product_efficient_attention 516.971T 18.10%
# - aten.addmm 416.754T 14.59%
# - aten._scaled_dot_product_efficient_attention_backward 646.214T 22.63%
# - aten.convolution_backward 0.289T 0.01%
# Training Epoch: 1/1, step 14/112 completed (loss: 1.0819196701049805): 13%|███▎ | 15/112 [01:01<06:37, 4.10s/it]
# Training Epoch: 1/1, step 14/112 completed (loss: 0.5718942880630493): 13%|███▎ | 15/112 [01:02<06:45, 4.18s/it]
# Training Epoch: 1/1, step 14/112 completed (loss: 0.7083172798156738): 13%|███▎ | 15/112 [01:02<06:42, 4.15s/it]
# Training Epoch: 1/1, step 14/112 completed (loss: 0.29959213733673096): 13%|███▏ | 15/112 [01:01<06:35, 4.08s/it]
# Max CUDA memory allocated was 69 GB
# Max CUDA memory reserved was 75 GB
# Peak active CUDA memory was 69 GB
# CUDA Malloc retries : 2
# CPU Total Peak Memory consumed during the train (max): 3 GB
# Epoch 1: train_perplexity=1.0953, train_epoch_loss=0.0910, epoch time 63.54467297301744s
# training params are saved in /home/dougljia/llama-cookbook/finetuned_model/fine-tuned-meta-llama/Llama-3.2-11B-Vision-Instruct/train_params.yaml
# Key: avg_train_prep, Value: 1.0953115224838257
# Key: avg_train_loss, Value: 0.09103880822658539
# Key: avg_epoch_time, Value: 63.54467297301744
# Key: avg_checkpoint_time, Value: 1.8998980522155762e-07
# Key: model_tflops, Value: 725.586666664745
```
Additionally, with the FLOPs counter on, each iteration takes about 4.1s, but without the counter each step takes only about 1.8s. Does this indicate the actual TFLOPs/s/GPU is more than 1400 (which is not possible)?
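For reference, the arithmetic behind that suspicion — assuming the table above reports per-rank FLOPs, achieved throughput is just the count divided by step time, and H100 peaks at roughly 989 dense BF16 TFLOPs/s:

```python
def achieved_tflops_per_gpu(counted_tflops_per_rank, step_seconds):
    # The count is per rank, so no division by GPU count is needed.
    return counted_tflops_per_rank / step_seconds

# Nightly's count at the reported ~4.1 s/step:
print(round(achieved_tflops_per_gpu(2855.461, 4.1)))  # 696
# At the ~1.8 s/step measured without the counter:
print(round(achieved_tflops_per_gpu(2855.461, 1.8)))  # 1586
```

Both values sit at or well above the hardware peak, which is consistent with the nightly count being inflated rather than the GPU being that fast.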
Could you please take a look? Thank you!
### Versions
--2025-01-29 10:11:12-- https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24353 (24K) [text/plain]
Saving to: ‘collect_env.py’
collect_env.py 100%[================================================================>] 23.78K --.-KB/s in 0.001s
2025-01-29 10:11:12 (38.1 MB/s) - ‘collect_env.py’ saved [24353/24353]
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4800.17
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | true |
2,818,894,954 | [ROCm][TunableOp] hipblaslt tf32 support | jeffdaily | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 8 | COLLABORATOR | TF32 is supported by hipblaslt. Support added by #143549. This PR expands integration to the TunableOp feature.
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
2,818,891,904 | Make regex error catching compatible with Python 3.12+. | haifeng-jin | closed | [
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | COLLABORATOR | In Python 3.12, the error message has changed from "Can't pickle local object" to "Can't get local object".
The old regex would no longer catch the error.
This PR makes it compatible with Python 3.12 while remaining backward compatible.
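A minimal sketch of a regex that matches both messages (the actual pattern in the PR may differ):

```python
import re

# Python <= 3.11 raises "Can't pickle local object ...";
# Python >= 3.12 raises "Can't get local object ..." instead.
LOCAL_OBJECT_ERR = re.compile(r"Can't (?:pickle|get) local object")

old_msg = "AttributeError: Can't pickle local object 'f.<locals>.g'"
new_msg = "AttributeError: Can't get local object 'f.<locals>.g'"
```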
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,818,871,557 | Add align_to_window option to onnx version of torch.stft | jackzhxng | open | [
"module: onnx",
"triaged",
"module: fft"
] | 0 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
Implement the `align_to_window` parameter in the [onnx version of torch.stft](https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset17.py#L102). This parameter was added to `torch.stft` in https://github.com/pytorch/pytorch/pull/145324.
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry | true |
2,818,866,459 | Add center and padding options to onnx version of torch.stft | jackzhxng | open | [
"module: onnx",
"triaged",
"module: fft"
] | 0 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
Implement center and padding functionality in the [onnx version of torch.stft](https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset17.py#L102). ONNX's [STFT](https://onnx.ai/onnx/operators/onnx__STFT.html) doesn't have centering options, so the input signal needs to be manually centered and padded before the function call.
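As an illustration of what "manually centered and padded" involves, here is a pure-Python sketch of the reflect padding that `torch.stft(center=True)` applies before framing (plain lists instead of tensors; not the ONNX exporter code):

```python
def center_pad(signal, n_fft, mode="reflect"):
    """Pad n_fft // 2 samples on each side, as torch.stft(center=True) does.

    'reflect' mirrors the signal without repeating the edge sample;
    any other mode falls back to zero padding here.
    """
    pad = n_fft // 2
    if mode == "reflect":
        left = signal[1:pad + 1][::-1]      # mirror, excluding signal[0]
        right = signal[-pad - 1:-1][::-1]   # mirror, excluding signal[-1]
    else:
        left = right = [0.0] * pad
    return list(left) + list(signal) + list(right)
```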
### Alternatives
_No response_
### Additional context
_No response_
cc @mruberry | true |
2,818,804,529 | Enable fast qlinear_dynamic path for AArch64 through ACL directly | fadara01 | closed | [
"module: cpu",
"triaged",
"open source",
"module: arm",
"release notes: quantization",
"release notes: releng",
"ciflow/linux-aarch64",
"arm priority"
] | 21 | COLLABORATOR | This enables a fast path for eager mode dynamic quantization for AArch64 through Arm Compute Library (ACL) directly.
Context: PR #126687 enabled an optimized implementation of `qlinear_dynamic` for AArch64 through ideep → oneDNN → ACL, which improved performance by ~10x compared to the previous implementation.
However, the current `qlinear_dynamic` path (ideep → oneDNN → ACL) suffers from high overhead due to the API friction between the stateless oneDNN API and the stateful ACL low-precision GEMM (`lowp_gemm`) API - for example, ACL's `lowp_gemm` objects cache information like weights reduction or weights in optimized memory format which oneDNN does not allow due to its stateless nature. Hence, ACL currently runs a (redundant) sum of columns and pre-transposition (to the gemm kernel's optimal format) for each GEMM operation.
This PR addresses the sub-optimalities above by integrating ACL directly with `qlinear_dynamic`. This approach yields an average speedup (averaged over context_lengths of 2^3 up to 2^9) of ~ 50% for `bert-base-uncased`, `bert-large-uncased`, `roberta-base`, `distilbert-base-uncased` with 16 threads on a Neoverse-V1 (with `transformers==4.48`) - See benchmark code below. To achieve this, we:
* Use ACL which is already built with PyTorch as a shared library when `USE_MKLDNN_ACL` is set.
* Add ACL to ATen's CPU include and dependency libs
* Introduce `PackedLinearWeightsACL` (as a subclass of `PackedLinearWeightsOnednn`) with an implementation of `qlinear_dynamic` that uses ACL directly, while `qlinear` still follows the oneDNN path.
* A future PR will introduce a direct ACL implementation of `qlinear` and will allow us to remove the dependence on `PackedLinearWeightsOnednn`.
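To illustrate the stateless-vs-stateful friction described above, here is a hypothetical pure-Python sketch in which `pack` stands in for ACL's weight pre-transposition and column-sum work (names and structure are illustrative only, not the actual ACL or oneDNN APIs):

```python
PACK_CALLS = {"n": 0}

def pack(weight):
    """Stand-in for ACL's weight reordering / reduction preprocessing."""
    PACK_CALLS["n"] += 1
    return [list(col) for col in zip(*weight)]  # transpose as a toy "optimal layout"

class StatelessGemm:
    """oneDNN-style: no state is kept, so the weight is repacked on every call."""
    def __init__(self, weight):
        self.weight = weight

    def run(self, x):
        wt = pack(self.weight)  # redundant per-call work
        return [sum(xi * ci for xi, ci in zip(x, col)) for col in wt]

class StatefulGemm:
    """ACL lowp_gemm-style: pack once at construction, reuse across calls."""
    def __init__(self, weight):
        self.wt = pack(weight)

    def run(self, x):
        return [sum(xi * ci for xi, ci in zip(x, col)) for col in self.wt]
```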
The following code was used to benchmark `qlinear_dynamic` performance:
```
# SPDX-FileCopyrightText: Copyright 2025 Arm Limited and/or its affiliate <open-source-office@arm.com>
# SPDX-License-Identifier: BSD-3-Clause
import torch
from transformers import AutoModel, AutoConfig
import time
import numpy as np
from argparse import ArgumentParser
class ModelArgumentParser(ArgumentParser):
def __init__(self) -> None:
super().__init__(description="huggingface model")
self.add_argument("--context_length",
help="context length - number of input tokens",
type=int,
default=64
)
self.add_argument("--model",
help="model checkpoint - i.e. 'bert-base-uncased'",
type=str,
default=None)
self.add_argument("--iters",
help="benchmark iterations",
default=500)
if __name__ == "__main__":
parser = ModelArgumentParser()
args = parser.parse_args()
model_name = args.model
config = AutoConfig.from_pretrained(model_name)
batch_size = 1
model = AutoModel.from_pretrained(model_name)
model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
model.eval()
inputs = torch.randint(config.vocab_size, (batch_size, args.context_length), dtype=torch.long, device="cpu")
times = []
with torch.no_grad():
# warmup
for _ in range(10):
model(inputs)
# benchmark
for _ in range(args.iters):
s = time.time_ns()
model(inputs)
times.append((time.time_ns() - s) / 1e6)
print("Model = ", model_name)
print("Context Length = ", args.context_length)
print("Min (ms) = ", min(times))
print("Mean (ms) = ", np.mean(times))
```
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01 | true |
2,818,775,094 | Resolve affine quantization namespace collision with torchao | andrewor14 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: AO frontend"
] | 7 | CONTRIBUTOR | Summary:
https://github.com/pytorch/pytorch/pull/141421
duplicated affine quantization custom ops from torchao into
the PT2E quantization flow, but these ops are registered under
the same namespace with the same name, causing "Duplicate
registration" errors for the new ops for use cases that import
from both repos. This commit fixes this by moving the PT2E
versions of the ops to a new namespace. In the long term,
we expect to migrate PT2E into torchao so users can migrate
back to the old namespace if they wish to.
Test Plan: python test/test_quantization.py -k test_channel_group_quantization
Differential Revision: D68838437
| true |
2,818,736,736 | [not ready for review] add structured logs for shape env mutations over time | bobrenjc93 | closed | [
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145940
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D68902581](https://our.internmc.facebook.com/intern/diff/D68902581) | true |
2,818,717,004 | Require that all HOPs be imported at `import torch` time | zou3519 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145939
* #145938
E.g. torch.ops.higher_order.cond does not exist until it is imported,
which is bad if it shows up in an FX graph or is used in some code
somewhere.
This PR also makes some more HOPs get imported at `import torch` time.
Test Plan:
- new tests | true |
2,818,716,874 | Better hop_db comment; move test to a non-export test file | zou3519 | closed | [
"Merged",
"topic: not user facing"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #145939
* __->__ #145938
Goal is for people to better test their HOPs.
Test Plan:
- tests | true |
2,818,685,745 | Disable AOTAutogradCache for triton version < 3.2 | jamesjwu | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145937
| true |
2,818,666,669 | `torch.tensordot`: performance improvements when contracting to a scalar. | nikitaved | open | [
"oncall: distributed",
"open source",
"ciflow/trunk",
"release notes: python_frontend",
"topic: performance",
"ciflow/inductor"
] | 24 | COLLABORATOR | As per title.
Fixes https://github.com/pytorch/pytorch/issues/145731
Touches only compute. The CPU overhead can potentially be further reduced.
Before:
```python
In [3]: n = 512
In [4]: A = torch.rand(n, n)
In [5]: B = torch.rand(n, n)
In [6]: %timeit torch.tensordot(A, B, [[0, 1], [0, 1]])
2.04 ms ± 70 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [7]: %timeit torch.tensordot(A, B, [[0, 1], [1, 0]])
2.85 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [8]: %timeit torch.tensordot(A, B, [[1, 0], [0, 1]])
2.9 ms ± 133 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [9]: %timeit torch.tensordot(A, B, [[1, 0], [1, 0]])
4.07 ms ± 262 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
After
```python
In [2]: n = 512
In [3]: A = torch.rand(n, n)
In [4]: B = torch.rand(n, n)
In [5]: %timeit torch.tensordot(A, B, [[0, 1], [0, 1]])
30.7 µs ± 2.51 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [6]: %timeit torch.tensordot(A, B, [[0, 1], [1, 0]])
141 µs ± 6.52 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [7]: %timeit torch.tensordot(A, B, [[1, 0], [0, 1]])
142 µs ± 4.03 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [8]: %timeit torch.tensordot(A, B, [[1, 0], [1, 0]])
62.8 µs ± 4.31 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
```
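The fast path for the scalar-output cases rests on the identity that contracting every dimension is just a dot product of the (suitably permuted) flattened operands — a pure-Python sketch of the `[[0, 1], [0, 1]]` case, not the actual ATen implementation:

```python
def tensordot_full(A, B):
    """Contract dims [[0, 1], [0, 1]] of two matrices to a scalar (naive)."""
    return sum(A[i][j] * B[i][j]
               for i in range(len(A)) for j in range(len(A[0])))

def flat_dot(A, B):
    """Equivalent fast path: dot product of the flattened operands."""
    a = [x for row in A for x in row]
    b = [x for row in B for x in row]
    return sum(x * y for x, y in zip(a, b))
```

For the mixed index orders (e.g. `[[0, 1], [1, 0]]`), one operand must be transposed before flattening, which is why those cases remain somewhat slower in the timings above.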
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,818,585,106 | [CPU Stream] Add noop for CPU stream record_event() and wait_event() | jvandebon | closed | [
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 14 | CONTRIBUTOR | Summary: Adds wait_event and record_event endpoints to CPU stream in order to facilitate device-agnostic code. Both methods are noops.
Test Plan: CI
Differential Revision: D68833927
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,818,558,830 | [Break XPU] Fix Inductor cuda bias UT | guangyey | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 7 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145934
# Motivation
[Break XPU] inductor ut: `inductor/test_inplace_padding.py::InplacePaddingTest::test_pad_non_zero - RuntimeError: Expected to find "empty_strided_cuda((2048, 2048), (2048, 1), torch.float32).as_strided((2048, 2047), (2048, 1))" but did not find it`
With this PR, `test_pad_non_zero` will pass on XPU.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,818,554,953 | Better hop_db comment; move test to a non-export test file | zou3519 | closed | [
"ciflow/trunk"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #145933
Goal is for people to better test their HOPs.
Test Plan:
- tests | true |
2,818,390,230 | cpp_wrapper: Move #includes to per-device header files | desertfire | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: releng",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 7 | CONTRIBUTOR | Summary:
This prepares us for the next PR in the stack, where we introduce pre-compiled per-device header files to save compilation time.
Reland https://github.com/pytorch/pytorch/pull/143909 after merge conflicts.
Co-authored-by: Benjamin Glass <[bglass@quansight.com](mailto:bglass@quansight.com)>
Differential Revision: D68656960
Pulled By: benjaminglass1
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov | true |
2,818,379,022 | [no ci] Edits on release notes | albanD | closed | [
"topic: not user facing"
] | 2 | COLLABORATOR | Easier to review per-commit.
Also, the developers section will need to be removed before posting, contrary to the currently staged version.
| true |
2,818,368,850 | [Test][Linalg][CUDA] Increase niter in test_svd_lowrank_cuda_float64 | Aidyn-A | closed | [
"module: cuda",
"triaged",
"open source",
"module: linear algebra",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9 | COLLABORATOR | A recent PR #143049 attempted to increase tolerances to make test passable. However, we are still seeing errors like:
```
Traceback (most recent call last):
File "~git/pytorch/test/test_linalg.py", line 2540, in test_svd_lowrank
run_subtest(None, size, (), device, torch.svd_lowrank, density=density)
File "~git/pytorch/test/test_linalg.py", line 2505, in run_subtest
self.assertEqual(A, a, rtol=1e-7, atol=2e-7)
File "~git/pytorch/torch/testing/_internal/common_utils.py", line 4044, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 90 / 1000000 (0.0%)
Greatest absolute difference: 7.795904016052784e-07 at index (176, 930) (up to 2e-07 allowed)
Greatest relative difference: inf at index (6, 179) (up to 1e-07 allowed)
```
Increasing the `niter` parameter actually decreases the numerical differences.
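As a miniature illustration of why more subspace iterations reduce the error (plain power iteration on a 2×2 symmetric matrix; the intuition only, not the actual `torch.svd_lowrank` algorithm):

```python
def power_iter(A, niter):
    """Power iteration: repeated multiply-and-normalize converges to the
    dominant eigenvector, with error shrinking each iteration."""
    v = [1.0, 0.0]
    for _ in range(niter):
        w = [A[0][0] * v[0] + A[0][1] * v[1],
             A[1][0] * v[0] + A[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

def err(niter):
    A = [[2.0, 1.0], [1.0, 2.0]]   # dominant eigenvector: [1, 1] / sqrt(2)
    t = 2 ** -0.5
    v = power_iter(A, niter)
    return max(abs(v[0] - t), abs(v[1] - t))
```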
cc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | true |
2,818,333,424 | torch.bucketize works incorrectly on uint input with negative boundaries after torch.compile-gpu | meetmul | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | ### 🐛 Describe the bug
Here is the code:
```python
import torch
input = torch.tensor([2,5], dtype=torch.uint8).to('cuda')
boundaries = torch.tensor([-10, -7, -7, -3], dtype=torch.int8).to('cuda')
compiled = torch.compile(torch.bucketize)
print(f"compiled: {compiled(input,boundaries)}")
print(f"expected: {torch.bucketize(input,boundaries)}")
```
Output:
```
compiled: tensor([0, 0], device='cuda:0')
expected: tensor([4, 4], device='cuda:0')
```
Here are the detailed triggering conditions:
1. `input`'s dtype is uint8 and `boundaries`'s dtype is int8.
2. The `boundaries` tensor has negative values and the `input` tensor has positive values.
3. The API is executed on CUDA.
I suspect there may be some implicit type casting for this uint8/int8 combination when running `torch.compile` on CUDA.
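For reference, the expected (eager) result can be reproduced with Python's `bisect` module — a dtype-agnostic sketch of `torch.bucketize`'s indexing:

```python
from bisect import bisect_left, bisect_right

def bucketize(values, boundaries, right=False):
    """Pure-Python model of torch.bucketize's bucket indices.

    right=False (the default) matches bisect_left: first index i with
    boundaries[i] >= v.  right=True matches bisect_right.
    """
    fn = bisect_right if right else bisect_left
    return [fn(boundaries, v) for v in values]
```

Since 2 and 5 are larger than every boundary in `[-10, -7, -7, -3]`, both land past the end of the boundary list, i.e. index 4 — the value eager mode returns above.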
### Error logs
```
compiled: tensor([0, 0], device='cuda:0')
expected: tensor([4, 4], device='cuda:0')
```
### Versions
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @aakhundov | true |
2,818,126,681 | DISABLED test_missing_getstate (__main__.TestScript) | pytorch-bot[bot] | closed | [
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_missing_getstate&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36340322722).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_missing_getstate`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 9226, in test_missing_getstate
with self.assertRaisesRegex(RuntimeError, "getstate"):
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 276, in __exit__
self._raiseFailure('"{}" does not match "{}"'.format(
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
expected_regex.pattern, str(exc_value)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 200, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: "getstate" does not match "RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
"
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_missing_getstate
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_jit.py -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |
2,818,126,420 | DISABLED test_nn_LSTM_with_layers (__main__.TestScript) | pytorch-bot[bot] | closed | [
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1 | NONE | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nn_LSTM_with_layers&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36340322722).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nn_LSTM_with_layers`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 14904, in test_nn_LSTM_with_layers
class M(torch.jit.ScriptModule):
...<6 lines>...
return self.rnn(x, (h0, c0))[0]
File "/var/lib/jenkins/workspace/test/test_jit.py", line 14909, in M
@torch.jit.script_method
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_script.py", line 365, in script_method
ast = get_jit_def(fn, fn.__name__, self_name="ScriptModule")
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/frontend.py", line 341, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
~~~~~~~~~^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 121, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
~~~~~~~~~~~~~~~~~~~~~~~~~^
fn, ErrorReport.call_stack()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_sources.py", line 24, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1238, in getsourcelines
lines, lnum = findsource(object)
~~~~~~~~~~^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/inspect.py", line 1074, in findsource
lines = linecache.getlines(file, module.__dict__)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1184, in __call__
result = self._inner_convert(
frame, cache_entry, hooks, frame_state, skip=skip + 1
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1048, in _compile
raise InternalTorchDynamoError(
f"{type(e).__qualname__}: {str(e)}"
).with_traceback(e.__traceback__) from None
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1078, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 988, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3087, in RETURN_VALUE
self._return(inst)
~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3062, in _return
and not self.symbolic_locals_contain_module_class()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3051, in symbolic_locals_contain_module_class
if isinstance(v, UserDefinedClassVariable) and issubclass(
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 191, in __instancecheck__
instance = instance.realize()
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 67, in realize
self._cache.realize()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 33, in realize
self.vt = VariableTracker.build(tx, self.value, source)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py", line 454, in build
return builder.VariableBuilder(tx, source)(value)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 385, in __call__
vt = self._wrap(value)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 620, in _wrap
result = dict(
build_key_value(i, k, v)
for i, (k, v) in enumerate(get_items_from_dict(value))
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/builder.py", line 622, in <genexpr>
for i, (k, v) in enumerate(get_items_from_dict(value))
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.InternalTorchDynamoError: RuntimeError: dictionary changed size during iteration
from user code:
File "/opt/conda/envs/py_3.13/lib/python3.13/linecache.py", line 38, in getlines
return cache[filename][2]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_jit.py TestScript.test_nn_LSTM_with_layers
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_jit.py -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @wdvr @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | true |