| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
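A minimal sketch of working with rows of this schema, assuming the table is exported as a JSON Lines file; the path `issues.jsonl` and the filtering choices are illustrative and not part of the dataset:
```
# Minimal sketch: load a JSONL export of the issue rows below and keep only
# the bot-filed "DISABLED test_*" issues. The file name "issues.jsonl" is a
# placeholder; the column names (id, title, user, state, labels, comments,
# author_association, body, is_title) follow the header row above.
import pandas as pd

df = pd.read_json("issues.jsonl", lines=True)

disabled = df[
    df["title"].str.startswith("DISABLED ")
    & (df["user"] == "pytorch-bot[bot]")
]

# Count disabled-test issues per label, e.g. to see which modules flake most.
label_counts = disabled.explode("labels")["labels"].value_counts()
print(label_counts.head(10))
```
Each record below follows this schema; the `body` field carries the full issue text, including the fenced tracebacks quoted in the disabled-test reports.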
2,883,150,806
|
[BE/metal] Rename REGISTER_I0_I1 to REGISTER_SPECIAL.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps"
] | 3
|
MEMBER
|
Now that it's used for other ops as well.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,883,138,730
|
DISABLED test_slice (__main__.AutoFunctionalizeTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: functionalization",
"oncall: pt2",
"module: pt2-dispatcher"
] | 6
|
NONE
|
Platforms: mac, macos, rocm, asan, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_slice&suite=AutoFunctionalizeTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37886249275).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_slice`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_auto_functionalize.py`
cc @clee2000 @bdhirsh @ezyang @chauhang @penguinwu @zou3519
| true
|
2,883,138,727
|
DISABLED test_nonstrict_trace_no_action_at_a_distance (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 4
|
NONE
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nonstrict_trace_no_action_at_a_distance&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37882128198).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nonstrict_trace_no_action_at_a_distance`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/dynamo/test_decorators.py", line 558, in test_nonstrict_trace_no_action_at_a_distance
self.assertEqual(cnts.frame_count, 2)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4096, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 2 but got 1.
Absolute difference: 1
Relative difference: 0.5
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_nonstrict_trace_no_action_at_a_distance
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,883,137,370
|
DISABLED test_nonstrict_trace_nested_custom_class (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: linux, mac, macos, rocm, asan, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nonstrict_trace_nested_custom_class&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37886249966).
Over the past 3 hours, it has been determined flaky in 12 workflow(s) with 12 failures and 12 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nonstrict_trace_nested_custom_class`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1927, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 201, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/user_defined.py", line 664, in call_function
return tx.inline_user_function_return(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1812, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2025, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 897, in call_function
return var.call_function(tx, call_args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/torch.py", line 1006, in call_function
out_vt = variables.UserFunctionVariable(tree_flatten).call_function(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1812, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1927, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1812, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1927, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1812, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1927, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1812, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1927, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1812, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1927, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1812, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1927, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1812, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2089, in LOAD_ATTR
self._load_attr(inst)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2079, in _load_attr
result = BuiltinVariable(getattr).call_function(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 1108, in call_function
return handler(tx, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 945, in builtin_dispatch
rv = fn(tx, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 850, in call_self_handler
result = self_handler(tx, *args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/builtin.py", line 1864, in call_getattr
return obj.var_getattr(tx, name)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/variables/user_defined.py", line 1304, in var_getattr
raise_observed_exception(AttributeError, tx)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/exc.py", line 368, in raise_observed_exception
raise observed_exception_map[exc_type]
torch._dynamo.exc.ObservedAttributeError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 441, in test_nonstrict_trace_nested_custom_class
res = opt_fn(x, y)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1417, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 594, in __call__
return _compile(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1047, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 755, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 791, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1418, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 256, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 709, in transform
tracer.run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 3234, in run
super().run()
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1811, in exception_handler
raise Unsupported("Observed exception")
torch._dynamo.exc.Unsupported: Observed exception
from user code:
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 431, in fn
p = Point(x, y)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_nonstrict_trace_nested_custom_class
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,883,137,368
|
DISABLED test_nonstrict_trace_newly_constructed_custom_class_with_side_effects (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: linux, mac, macos, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nonstrict_trace_newly_constructed_custom_class_with_side_effects&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37886249966).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 7 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nonstrict_trace_newly_constructed_custom_class_with_side_effects`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2682, in CALL
self._call(inst)
~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2676, in _call
self.call_function(fn, args, kwargs)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/lazy.py", line 201, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/user_defined.py", line 664, in call_function
return tx.inline_user_function_return(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
VariableTracker.build(
^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
kwargs,
^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
~~~~~~~~~~~~~~~~~~~~~~^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2025, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/functions.py", line 897, in call_function
return var.call_function(tx, call_args, kwargs)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/torch.py", line 1006, in call_function
out_vt = variables.UserFunctionVariable(tree_flatten).call_function(
tx, [packed_input_vt], {}
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/functions.py", line 405, in call_function
return super().call_function(tx, args, kwargs)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
~~~~~~~~~~~~~~~~~~~~~~^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 769, in wrapper
return inner_fn(self, inst)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2682, in CALL
self._call(inst)
~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 2676, in _call
self.call_function(fn, args, kwargs)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1017, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/functions.py", line 186, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1034, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3455, in inline_call
return tracer.inline_call_()
~~~~~~~~~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3634, in inline_call_
self.run()
~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
~~~~~~~~~~~~~~~~~~~~~~^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in exception_handler
raise raised_exception
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1093, in step
self.dispatch_table[inst.opcode](self, inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1675, in RERAISE
self._raise_exception_variable(inst)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1620, in _raise_exception_variable
raise observed_exception_type(f"raised exception {val}")
torch._dynamo.exc.ObservedAttributeError: raised exception ExceptionVariable(<class 'AttributeError'>)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 389, in test_nonstrict_trace_newly_constructed_custom_class_with_side_effects
res = opt_fn(x, y)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1417, in __call__
return self._torchdynamo_orig_callable(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
frame, cache_entry, self.hooks, frame_state, skip=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 594, in __call__
return _compile(
frame.f_code,
...<14 lines>...
skip=skip + 1,
)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 1047, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 755, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 791, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/bytecode_transformation.py", line 1418, in transform_code_object
transformations(instructions, code_options)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 256, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/convert_frame.py", line 709, in transform
tracer.run()
~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 3234, in run
super().run()
~~~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1183, in run
while self.step():
~~~~~~~~~^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1098, in step
self.exception_handler(e)
~~~~~~~~~~~~~~~~~~~~~~^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/symbolic_convert.py", line 1737, in exception_handler
raise Unsupported("Observed exception")
torch._dynamo.exc.Unsupported: Observed exception
from user code:
File "/var/lib/jenkins/workspace/test/dynamo/test_decorators.py", line 379, in fn
p = Point(x, y)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_nonstrict_trace_newly_constructed_custom_class_with_side_effects
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,883,137,136
|
DISABLED test_nonstrict_trace_nested_custom_class_error (__main__.DecoratorTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: asan, linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nonstrict_trace_nested_custom_class_error&suite=DecoratorTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37883389091).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nonstrict_trace_nested_custom_class_error`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
Truncated for length
```
line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1013, in helper
children, context = flatten_fn(node)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1020, in tree_flatten
treespec = helper(tree, leaves)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in helper
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1016, in <listcomp>
subspecs = [helper(child, leaves) for child in children]
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 1007, in helper
if _is_leaf(node, is_leaf=is_leaf):
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 802, in _is_leaf
return (is_leaf is not None and is_leaf(tree)) or _get_node_type(
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 795, in _get_node_type
if _is_namedtuple_instance(tree):
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/utils/_pytree.py", line 786, in _is_namedtuple_instance
if len(bases) != 1 or bases[0] != tuple:
File "/Users/ec2-user/runner/_work/_temp/conda_environment_13552406917/lib/python3.9/site-packages/torch/_dynamo/polyfills/__init__.py", line 242, in cmp_ne
if isinstance(type(a).__ne__, types.FunctionType):
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
To execute this test, run the following from the base repo dir:
python test/dynamo/test_decorators.py DecoratorTests.test_nonstrict_trace_nested_custom_class_error
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_decorators.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,883,100,591
|
Remove HuggingFace reader and writer from __init__.py
|
ankitageorge
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary: This was causing HFStorageReader/Writer to be imported, which imports fsspec, but dependencies don't have fsspec, which causes builds to fail.
Differential Revision: D70286926
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,883,099,832
|
abs and arange Meta impls are non-standard and thus not working with other backends
|
albanD
|
open
|
[
"triaged",
"module: dispatch"
] | 0
|
COLLABORATOR
|
As reported when working on the tinygrad backend.
From a quick look at abs, it is CompositeExplicitAutograd and ends up calling into https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/UnaryOps.cpp#L478
This relies on the out= variant implementation doing the resize, which most out-of-place ops we have (including auto-generated ones) don't do; they instead use the Meta implementation to compute the output shape before calling the out= variant.
I assume arange is behaving in a similar way.
To help simplify backend implementations, I think we should enforce the assumption that providing the out= variant of an op is always enough to provide outofplace and inplace variants (with default fallback when needed).
That behavior should guarantee though that the outofplace variant always get a properly shaped out argument (to avoid duplicating shape logic) when not being directly called by the end user.
I expect it will be only a minor refactor on our end to achieve that and shouldn't be too constraining.
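For concreteness, here is a minimal Python sketch of the proposed contract; `my_backend_abs_out` is a hypothetical stand-in for a backend's out= kernel, not anything in the codebase:
```python
import torch

def my_backend_abs_out(self: torch.Tensor, out: torch.Tensor) -> torch.Tensor:
    # Under the proposed contract, `out` is already resized to the right shape
    # by the time the kernel runs, so no shape logic is needed here.
    assert out.shape == self.shape
    out.copy_(self.abs())  # stand-in for the real backend computation
    return out

x = torch.randn(4)
out = torch.empty(4)
my_backend_abs_out(x, out)
```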
Functions to check
- [ ] abs
- [ ] arange
- [ ] logical_not
- [ ] angle
cc @bdhirsh @zou3519 what do you think?
| true
|
2,883,099,462
|
ci: Remove manylinux 2014 remnants
|
seemethere
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148028
These are the only remaining references I could find to manylinux2014,
we should probably look to remove these a bit quicker since it made it
difficult to know which Dockerfiles were important in
.ci/docker/manywheel/
> [!TIP]
> I checked if we were using these by running
> `rg 2014 .github/`
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,883,071,399
|
[CI] test upload: better check for if job is rerun disabled tests
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Some disabled test runs weren't being uploaded as disabled tests because some dynamo tests are set to mark themselves as skipped if they are failing. This makes the script think that there are fewer retries than there actually are and that the job is not a rerun-disabled-tests job. Instead, query for the job name to see if it contains "rerun disabled tests", and fall back to counting the number of retries if the query fails.
Alternate options: relax the check for the number of tests
| true
|
2,883,056,102
|
Reference the commit explicitly
|
ZainRizvi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Reference the commit tested by CI explicitly, and fail the merge if the PR was updated.
Tested locally
| true
|
2,883,042,343
|
[ONNX] Fix missed None type support in dynamic shapes string cases
|
titaiwangms
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: bug fixes"
] | 3
|
COLLABORATOR
|
In `_any_str_or_dim_in_dynamic_shapes`, we strictly guard the `dynamic_shapes` to make sure the flattened shapes are valid. But the code failed to consider that None could appear in the shapes.
NOTE: Found in benchmarking with Olive.
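A rough sketch of the kind of input that hits this gap (the model and argument names are made up for illustration; the point is a string-named dim mixed with None for a static dim):
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

x = torch.randn(2, 3)
# dim 0 is a named (dynamic) dim, dim 1 is None (static) -- the mixed case.
onnx_program = torch.onnx.export(
    M(), (x,), dynamic_shapes={"x": {0: "batch", 1: None}}, dynamo=True
)
```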
| true
|
2,883,029,680
|
[CUDA][complex] skip `test_reference_numerics_large_jiterator_unary_cuda_complex64` on CUDA
|
eqy
|
closed
|
[
"module: cuda",
"module: complex",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: jiterator"
] | 6
|
COLLABORATOR
|
Already skipped on ROCm for a similar reason: recent numpy versions changed the convention from `nan+infj` to `-inf+infj`.
cc @ptrblck @msaroufim @jerryzh168 @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames
| true
|
2,883,004,495
|
torch.utils.checkpoint preserves torch function mode stack during recompute
|
soulitzer
|
open
|
[
"release notes: autograd",
"topic: bug fixes"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148023
* #146633
Fixes https://github.com/pytorch/pytorch/issues/147995
TorchFunctionModeTLS is part of the autograd tls, but because .backward() itself is a leaf for TorchFunctionMode, the mode is disabled before we enter into the engine. Conversely, since TorchDispatchMode traces through the .backward() python call, we don't actually need to manually stash/restore if the user keeps the same mode enabled.
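A minimal sketch of the scenario this targets (names are illustrative): a `TorchFunctionMode` is active when the checkpointed forward runs, and the recompute during backward should see the same mode stack.
```python
import torch
from torch.overrides import TorchFunctionMode
from torch.utils.checkpoint import checkpoint

class LoggingMode(TorchFunctionMode):
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        return func(*args, **kwargs)

def fn(x):
    return x.sin().cos()

x = torch.randn(4, requires_grad=True)
with LoggingMode():
    out = checkpoint(fn, x, use_reentrant=False)
out.sum().backward()  # the recompute should run under the same mode stack
```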
| true
|
2,882,982,076
|
add doc and test
|
wz337
|
closed
|
[
"oncall: distributed"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wconstab @d4l3k @c-p-i-o
| true
|
2,882,972,415
|
[dynamo] run-only recursively on recompile limit exceeded
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148021
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,882,955,528
|
[dynamo] expose code execution strategy to python
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148021
* __->__ #148020
@anijain2305 this can be used to mark a code object to be skipped/run-only (recursively) while tracing.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,882,916,536
|
[while_loop][inductor] relax the constraint that all inputs must be on the same device
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148019
Previously, we required all inputs of while_loop to be on the same device. However, there are use cases where we want to keep some of the inputs on cpu while others stay on gpu, e.g. keeping a loop_idx on cpu avoids copies to the gpu device. This PR relaxes the constraint and only checks that the carry and the input at the same position have the same device.
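A rough sketch of the use case this relaxation enables (assuming the `torch._higher_order_ops.while_loop` entry point; illustrative only, requires a CUDA device):
```python
import torch
from torch._higher_order_ops import while_loop

def cond_fn(i, x):
    return i < 3

def body_fn(i, x):
    return i + 1, x * 2.0

if torch.cuda.is_available():
    i0 = torch.tensor(0)               # loop counter stays on CPU
    x0 = torch.ones(4, device="cuda")  # data carry lives on GPU
    i_final, x_final = while_loop(cond_fn, body_fn, (i0, x0))
```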
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,882,899,912
|
Make torch.serialization.skip_data work with torch.load
|
mikaylagawarecki
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148018
* #147788
* #147787
* #147786
| true
|
2,882,897,077
|
[export] Sync aoti schema to schema.py
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 10
|
CONTRIBUTOR
|
Summary: Synchronizing internal AOTI schema to OSS schema.py
Test Plan: CI
Differential Revision: D70271151
| true
|
2,882,892,058
|
[test] workflow start up failure
|
hashupdatebot
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,882,885,283
|
[DTensor][Test] Add a test to demonstrate current dtensor view behavior if redistribution happens
|
wz337
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This does not fix the view op issue when redistribution happens. We want to add a test to demonstrate/record the issue, in which the distributed behavior does not match up with single device behavior.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wconstab @d4l3k @c-p-i-o
| true
|
2,882,879,956
|
DISABLED test_wait_tensor (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"oncall: distributed",
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d",
"module: unknown"
] | 16
|
NONE
|
Platforms: inductor, rocm, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_wait_tensor&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37870922289).
Over the past 3 hours, it has been determined flaky in 27 workflow(s) with 54 failures and 27 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_wait_tensor`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/distributed/test_c10d_functional_native.py", line 706, in setUp
dist.init_process_group(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 95, in wrapper
func_return = func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1638, in init_process_group
raise ValueError("trying to initialize the default process group twice!")
ValueError: trying to initialize the default process group twice!
```
</details>
Test file path: `distributed/test_c10d_functional_native.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000
| true
|
2,882,879,852
|
DISABLED test_recompile (__main__.AutoFunctionalizeTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: flaky-tests",
"skipped",
"module: unknown",
"oncall: pt2"
] | 5
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_recompile&suite=AutoFunctionalizeTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37873997305).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_recompile`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_auto_functionalize.py`
cc @clee2000 @chauhang @penguinwu
| true
|
2,882,874,759
|
Fixed grammar error
|
bmelkeysancsoft
|
closed
|
[
"open source"
] | 5
|
NONE
|
Added a missing word
| true
|
2,882,833,122
|
[Inductor] fix `AOTInductorTestABICompatibleGpu.test_triton_kernel_weird_param_order` with new Triton
|
anmyachev
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 10
|
COLLABORATOR
|
In this case, the parameters have already been filtered [here](https://github.com/pytorch/pytorch/blob/201666d77dd980d71e392f705d7fac6256ee28f9/torch/_inductor/codegen/cpp_wrapper_gpu.py#L335) and subsequent filtering is not only unnecessary, it breaks the code, since the positions of the parameters change after filtering. For this test, for example, the second filtering discarded `buf0`.
For example:
```python
(Pdb) triton_meta["signature"]
{'in_ptr0': '*fp32', 'in_ptr1': '*fp32', 'n_elements': 'i32', 'BLOCK_SIZE': 'constexpr', 'out_ptr': '*fp32'}
(Pdb) call_args
['arg0_1', 'arg0_1', '256L', 'buf0']
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @davidberard98 @YUNQIUGUO
| true
|
2,882,829,944
|
Back out "Only call triton in worker process, ahead of time compile"
|
jamesjwu
|
closed
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148010
Original commit changeset: 5e70e713d95b
Original Phabricator Diff: D69123174
Differential Revision: [D70210584](https://our.internmc.facebook.com/intern/diff/D70210584/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,882,827,990
|
Back out "Only call triton in worker process, ahead of time compile"
|
jamesjwu
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148009
Original commit changeset: 5e70e713d95b
Original Phabricator Diff: D69123174
Differential Revision: [D70210584](https://our.internmc.facebook.com/intern/diff/D70210584/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,882,822,852
|
Back out "Only call triton in worker process, ahead of time compile"
|
jamesjwu
|
closed
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148008
Original commit changeset: 5e70e713d95b
Original Phabricator Diff: D69123174
Differential Revision: [D70210584](https://our.internmc.facebook.com/intern/diff/D70210584/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,882,818,921
|
[dynamo] Make `nonstrict_trace` work with some `pytree.register_constant`-ed instances
|
StrongerXi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148386
* __->__ #148007
* #148385
As title, this enables `nonstrict_trace`-ed function to take in object
whose type has been `pytree.register_constant`-ed, as long as the object
existed outside the `torch.compile` region. This also forces Dynamo to
emit a `EQUALS_MATCH` guard on the object.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,882,801,031
|
[Inductor] Use generic GPU device in test_preserves_strides
|
alexbaden
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 7
|
COLLABORATOR
|
#147861 added a new test tagged for the generic GPU but uses the cuda GPU type for creating the tensors. Update the GPU type to also be generic. This passes with my local Intel Triton install, presumably it will work for the current pin.
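Sketch of the pattern this change applies (illustrative only): create the test tensors on the generic GPU type instead of hard-coding "cuda".
```python
import torch
from torch.testing._internal.inductor_utils import GPU_TYPE

x = torch.randn(8, device=GPU_TYPE)  # "cuda" on NVIDIA/AMD, "xpu" on Intel
```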
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,882,794,741
|
Generate AOTI input check by default
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export",
"module: aotinductor"
] | 8
|
CONTRIBUTOR
|
Summary:
Generate the AOTI size and stride input checks by default, but only run the checks if the `AOT_INDUCTOR_DEBUG_COMPILE` env variable is set (to avoid slowing down performance).
Example output:
```cpp
bool _check_aoti_runtime_check_inputs_env() {
const static char* env_var_value = getenv("AOTI_RUNTIME_CHECK_INPUTS");
const static bool result = env_var_value != nullptr && env_var_value[0] != '\0';
return result;
}
AOTI_NOINLINE static void __check_inputs_outputs(
AtenTensorHandle* input_handles,
AtenTensorHandle* output_handles) {
if (!_check_aoti_runtime_check_inputs_env()){
return;
}
//rest of the check
}
```
Test Plan: CI
Differential Revision: D70260490
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @desertfire
| true
|
2,882,792,284
|
[inductor][ck] kBatch filtering with gen_ops
|
coconutruben
|
closed
|
[
"module: rocm",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
# Why
not all choices of kBatch are valid and will lead to a runtime error (when CK checks the validity of the args)
https://github.com/ROCm/composable_kernel/blob/c9bcfd755ed4d2102d76a6f545ac6e9a030d7d8e/include/ck/tensor_operation/gpu/grid/gridwise_gemm_xdl_cshuffle_v3_multi_d.hpp#L1020
# What
- move kBatch inside the gen_ops to have more control over it, and be able to filter it
- expand filtering based on the cpp logic
- refactor the padding checks to be more readable
Test Plan:
```
buck2 run -c fbcode.re_gpu_tests=False mode/opt-amd-gpu fbcode//deeplearning/aot_inductor/benchmark/sampling:test_gemm_autotune_benchmark_AMD_block_0
```
with
kBatch = 128: some filtering
kBatch = 1: no filtering
kBatch = 1738: all options filtered out
Reviewed By: henrylhtsang
Differential Revision: D70211442
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,882,783,750
|
Error : torch/utils/_sympy/interp.py:176] [0/2] failed while executing pow_by_natural([VR1, int_oo], VR[-1, -1]])
|
sheridanbowman
|
open
|
[
"needs reproduction",
"triaged",
"oncall: pt2"
] | 0
|
NONE
|
### 🐛 Describe the bug
Using the Diffusers library to generate stable diffusion images on an nvidia/cuda:12.4.0-runtime-ubuntu22.04 docker image, with a torch.compile()'d stable diffusion model:
```torch/utils/_sympy/interp.py:176] [0/2] failed while executing pow_by_natural([VR1, int_oo], VR[-1, -1]])```
The error occurs but doesn't crash the script, and only for some generations, not others.
Here's the list of python packages in use:
torch torchvision --index-url "https://download.pytorch.org/whl/cu124"
xformers --index-url "https://download.pytorch.org/whl/cu124"
compel==2.0.3 \
accelerate==0.30.1 \
diffusers==0.32.2 \
transformers==4.48.0 \
torchao==0.8.0 \
matplotlib==3.8.4 \
peft==0.10.0 \
opencv-python==4.6.0.66 \
open_clip_torch==2.30.0 \
pandas==2.2.3 \
pillow==9.2.0 \
scikit-image==0.19.3 \
scikit-learn==1.1.3 \
scipy==1.13.1 \
timm==0.8.19.dev0 \
numpy==1.26.4 \
pytorch-lightning==2.2.5
I'll update this issue after testing whether it still occurs without torch.compile(), after narrowing down the specific generation criteria that cause the issue (maybe it's a particular set of X/Y image dimensions fed into, or requested from, Diffusers), and after narrowing down which file/line in the diffusers generation library triggers the error.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 572.16
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 4223.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves umip gfni vaes vpclmulqdq rdpid fsrm flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 25 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Vulnerable: No microcode
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] open_clip_torch==2.30.0
[pip3] pytorch-lightning==2.2.5
[pip3] torch==2.6.0+cu124
[pip3] torchao==0.8.0
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu
| true
|
2,882,779,953
|
[while_loop] require stride to be the same as input for body_fn
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148580
* __->__ #148002
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,882,739,529
|
[async TP] insert reshape node to handle "reshape -> scaled mm -> reshape pattern" in async TP with rowwise scales
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (pipeline)",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 31
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/torchtitan/issues/864
## Summary
While testing torchtitan with float8 training with rowwise scaling + async TP, a [bug](https://github.com/pytorch/torchtitan/issues/864) was discovered. The symptom was the scaling factor dims did not match the dims of the tensor the scales were to be applied to.
My [root cause analysis](https://github.com/pytorch/torchtitan/issues/864#issuecomment-2672465060) determined the reason is that when async TP graph manipulation constructs the `fused_scaled_matmul_reduce_scatter` op, it does not yet handle the "reshape -> scaled mm -> reshape" pattern used in torchao [here](https://github.com/pytorch/ao/blob/ed361ff5c7dd33aba9b4a0da2bd744de5a5debfb/torchao/float8/float8_linear.py#L122-L124) - specifically when row-wise scales are being used.
## TL;DR of root cause
- When a Float8Tensor is reshaped, the scale is reshaped along with it so the dimensions are aligned.
- In the graph manipulation logic of the micropipeline TP post grad pass, the scaled_mm `A tensor` node is referencing the tensor _before_ to the reshape op, but referencing the `A_scale` node _after_ the reshape op.
## Example
- Concrete example:
- `A tensor` is a Float8Tensor with shape (1,8192,2048) and scale of shape (1,8192,1) when a matmul op is called in torchao [here](https://github.com/pytorch/ao/blob/8706d3f3b087b876d625c720e98236c265c0ba98/torchao/float8/float8_linear.py#L70). Torchao does a reshape -> scaled mm -> reshape [here](https://github.com/pytorch/ao/blob/ed361ff5c7dd33aba9b4a0da2bd744de5a5debfb/torchao/float8/float8_linear.py#L122). When a Float8Tensor is reshaped, its scale is reshaped along with it [here](https://github.com/pytorch/ao/blob/8706d3f3b087b876d625c720e98236c265c0ba98/torchao/float8/float8_ops.py#L152). So the first reshape makes the "A tensor" (1,8192,2048) => (8192,2048) and the scale (1,8192,1) => (8192,1).
- During post grad pass in async TP:
- `A_node` has shape (1,8192,2048) (tensor from before this [reshape](https://github.com/pytorch/ao/blob/ed361ff5c7dd33aba9b4a0da2bd744de5a5debfb/torchao/float8/float8_linear.py#L122))
- `A_scale` has shape (8192,1) (due to reshape op above, which caused the scale to be reshaped from (1,8192,1) => (8192,1)).
## Solution
**Note:** the compiler inserts a `reciprocal` op after the reshape, so we can't simply use the node before the reshape as the `A_scale_node`, otherwise it will affect the numerics.
- Short-term solution: if the specific pattern shown below is detected, insert a reshape node after the reciprocal, to reshape the reciprocal output back to the original shape it had before the reshape (see the shape sketch after this list).
- reshape is just a view, so there should be no impact on performance
```
Before:
reshape (a,b,c) to (a*b,c) -> reciprocal
After:
reshape (a,b,c) to (a*b,c) -> reciprocal -> reshape (a*b,c) to (a,b,c)
```
- Long-term solution: implement a `torch._scaled_matmul` which can support 3D+ `A tensor`
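For concreteness, a shape-level sketch of the short-term fix using the sizes from the example above (illustrative only; the actual change manipulates FX graph nodes in the post-grad pass):
```python
import torch

a = torch.randn(1, 8192, 2048)
scale = torch.randn(1, 8192, 1)

a_2d = a.reshape(-1, 2048)            # (8192, 2048), as torchao does
scale_2d = scale.reshape(-1, 1)       # (8192, 1): the scale follows the tensor
recip = scale_2d.reciprocal()         # the compiler inserts a reciprocal here
recip_3d = recip.reshape(1, 8192, 1)  # inserted reshape restores the original
                                      # leading dims so it lines up with the 3D A tensor
```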
## Test plan
- Added unit test which exercises this new path
- Manually tested with torchtitan with float8 rowwise + async TP
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,882,736,697
|
Remove +PTX from cuda 12.6 builds
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Similar to: https://github.com/pytorch/pytorch/pull/141142
Ahead of the release 2.7
I see following validation failure: https://github.com/pytorch/test-infra/actions/runs/13552433445/job/37879041739?pr=6339
```
RuntimeError: Binary size of torch-2.7.0.dev20250226+cu126-cp310-cp310-manylinux_2_28_x86_64.whl 1076.45 MB exceeds the threshold 750 MB
```
| true
|
2,882,705,849
|
Adjust test_mm_triton_kernel_benchmark for unpadded tensors
|
iupaikov-amd
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"rocm"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
During ROCm runs, these tests naturally show that the padding path would be slower for our archs, so pad_mm chooses to opt out of padding, which fails the tests.
My understanding of the reasoning is that these tests don't check IF the operation should be padded in the first place, but HOW it is padded and whether it is done correctly. Moreover, the tests shouldn't really be hardware dependent or carry a hardware-specific condition.
More info in discussion here: https://github.com/pytorch/pytorch/pull/147620#issuecomment-2679943847
### Alternatives
Create a separate version of test for unpadded tensors.
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @shunting314 @eellison @jataylo @jeffdaily
| true
|
2,882,637,765
|
Add _fft_r2c as core ATen
|
larryliu0820
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
As titled.
| true
|
2,882,618,983
|
Fix decomp for linspace
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147997
In python decompositions, we shouldn't do any non-functional operations for functional operators. This should go away once we start decomposing before functionalization.
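As an illustration of the style point (not the actual linspace decomposition): a decomposition of a functional operator should avoid mutating ops like the first variant below.
```python
import torch

def decomp_mutating(start, end, steps):
    # Mutating style: allocate then write in place -- the kind of
    # non-functional op a decomp of a functional operator should avoid.
    out = torch.empty(steps)
    out.copy_(start + torch.arange(steps) * (end - start) / (steps - 1))
    return out

def decomp_functional(start, end, steps):
    # Functional style: same result (for steps > 1), no mutation.
    return start + torch.arange(steps) * (end - start) / (steps - 1)

print(decomp_functional(0.0, 1.0, 5))
```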
Differential Revision: [D70265200](https://our.internmc.facebook.com/intern/diff/D70265200)
| true
|
2,882,575,449
|
[do not merge yet] update grammar
|
sokkaofthewatertribe
|
closed
|
[
"open source",
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 16
|
CONTRIBUTOR
| null | true
|
2,882,542,075
|
Checkpoint doesn't work with torch_function if torch_function change tensor metadata
|
fegin
|
open
|
[
"module: activation checkpointing",
"triaged",
"module: __torch_function__"
] | 9
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We are trying to use `TorchFunctionMode` to convert the input tensors of SDPA to DTensor (if they are not already). Unfortunately this approach fails. Digging into the details, this seems to be a fundamental limitation of checkpoint, as checkpoint is not aware of `__torch_function__`. Below is a minimal repro which uses `__torch_function__` to reshape the input tensors.
```
import torch
from torch.utils.checkpoint import checkpoint
from torch.overrides import TorchFunctionMode
def func(x, y) -> None:
return torch.matmul(x, y)
class DistributeFunction(TorchFunctionMode):
def __torch_function__(self, func, types, args, kwargs=None):
if kwargs is None:
kwargs = {}
if func != torch.matmul:
return func(*args, **kwargs)
a0 = args[0].reshape((-1, 128))
a1 = args[1].reshape((128, -1))
return func(a0, a1)
with DistributeFunction():
a = torch.randn(64, 64)
a.requires_grad = True
out = checkpoint(func, a, a, use_reentrant=False)
out.sum().backward()
```
Checkpoint complains metadata mismatch:
```
File "/data/users/chienchin/mywork/pytorch/test.py", line 16, in __torch_function__
return func(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/data/users/chienchin/mywork/pytorch/torch/autograd/__init__.py", line 353, in backward
_engine_run_backward(
File "/data/users/chienchin/mywork/pytorch/torch/autograd/graph.py", line 824, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/data/users/chienchin/mywork/pytorch/torch/utils/checkpoint.py", line 1129, in unpack_hook
frame.check_recomputed_tensors_match(gid)
File "/data/users/chienchin/mywork/pytorch/torch/utils/checkpoint.py", line 903, in check_recomputed_tensors_match
raise CheckpointError(
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 0:
saved metadata: {'shape': torch.Size([128, 32]), 'dtype': torch.float32, 'device': device(type='cpu')}
recomputed metadata: {'shape': torch.Size([64, 64]), 'dtype': torch.float32, 'device': device(type='cpu')}
tensor at position 1:
saved metadata: {'shape': torch.Size([32, 128]), 'dtype': torch.float32, 'device': device(type='cpu')}
recomputed metadata: {'shape': torch.Size([64, 64]), 'dtype': torch.float32, 'device': device(type='cpu')}
```
Is there any way to make this `__torch_function__` work with Checkpoint?
### Versions
nightly
cc @soulitzer @hameerabbasi @rgommers @ezyang
| true
|
2,882,518,340
|
[CI] Don't clean workspace when fetching repo
|
clee2000
|
closed
|
[
"module: rocm",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
Tested on https://github.com/pytorch/pytorch/pull/148995
Do two checkouts: the first one attempts to use an existing checkout if possible; the second one removes the workspace and re-pulls everything if the first one fails.
This is probably not going to be useful if we switch entirely to ephemeral runners but w/e
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,882,510,205
|
[ROCm] Use generated CK config.h rather than system
|
alugorey
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 7
|
CONTRIBUTOR
|
Prevents pytorch from potentially using the system version of config.h and instead prioritizes the CK submodule's version.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,882,507,362
|
[export] Add export_cache
|
angelayi
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147992
* #147863
* #147862
I'm open to better names!
export_cache is a hackier version of @mark_compiled_region.
In strict-export, the behavior will be the same as @mark_compiled_region,
which is that it will differentiate calls to the same function by
strict-exporting each function call, and compare the graphs and inputs to
determine if we should deduplicate the graphs.
But when compiling with non-strict export, @export_cache will
differentiate calls to the same function by **only the input metadata**. It
also differs in that it can only differentiate calls to *different
functions* based on the ``name`` argument.
| true
|
2,882,448,520
|
Verifier (in torch.export.export) does not make use of if-condition inside branches
|
gramalingam
|
open
|
[
"triaged",
"oncall: pt2",
"oncall: export"
] | 5
|
NONE
|
The exporter fails because it is unable to verify that the condition in a torch.cond holds true within the then branch. In the example below, it fails and produces the following error message
```
torch._dynamo.exc.UserError: Constraints violated (sequence_length)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of sequence_length = L['args'][0][0].size()[1] in the specified range satisfy the generated guard 6 <= L['args'][0][0].size()[1] and L['args'][0][0].size()[1] <= IntInfinity()
Suggested fixes:
sequence_length = Dim('sequence_length', min=6)
```
even though the true-branch executes only when sequence_length is 6 or more. Is there any work-around for such cases?
```py
import torch
class MyModule(torch.nn.Module):
def __init__(self):
super(MyModule, self).__init__()
def forward(self, X):
def true_fn(X):
bs, sl, n = X.shape
torch._check(sl > 5)
return X + 1
def false_fn(X):
return X + 2
bs, sl, n = X.shape
return torch.cond(sl > 5, true_fn, false_fn, (X,))
model = MyModule()
model.eval()
batch_dim = torch.export.Dim("batch_size")
sequence_dim = torch.export.Dim("sequence_length")
dynamic_shapes = { 'X': {0: batch_dim, 1: sequence_dim}, }
B = 2
S = 700
N = 12
X = torch.randn(B, S, N)
inputs = (X,)
program = torch.export.export(model, inputs, dynamic_shapes=dynamic_shapes, strict=False)
```
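One possible workaround, continuing the repro above and applying the exporter's suggested fix (note that it constrains sequence_length to at least 6 for both branches):
```python
sequence_dim = torch.export.Dim("sequence_length", min=6)
dynamic_shapes = {"X": {0: batch_dim, 1: sequence_dim}}
program = torch.export.export(model, inputs, dynamic_shapes=dynamic_shapes, strict=False)
```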
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,882,376,059
|
Support `contextlib.suppress`
|
guilhermeleobas
|
open
|
[
"open source",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147990
* #146506
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,882,348,138
|
doc/xpu: align description of SyclExtension with CPP/CUDA
|
dvrogozh
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: xpu"
] | 5
|
CONTRIBUTOR
|
This commit just aligns the description of the `py_limited_api` feature in SyclExtension with CPP/CUDA. We missed this change when adding SyclExtension due to parallel work on the changes. For CPP/CUDA the change was done in 515e55e6927ad5f57ec222d7779712630341acf3.
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5
| true
|
2,882,309,655
|
torch.utils._content_store: fix error in hash_storage on XPU
|
benjaminglass1
|
closed
|
[] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147985
See https://github.com/pytorch/pytorch/actions/runs/13508573465/job/37745227468 for an example error. This is triggering after the merge of #147541, which enabled Dynamo compilation on XPU.
| true
|
2,882,260,880
|
xpu: test py_limited_api with SyclExtension
|
dvrogozh
|
open
|
[
"open source",
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
This commit extends the existing CUDA test to cover the XPU SyclExtension case for the same feature - `py_limited_api`.
> [!NOTE]
> THE CHANGE CAN NOT BE MERGED AS IS
> Change requires update of the commit pin for torch-xpu-ops.
Requires: https://github.com/intel/torch-xpu-ops/pull/1405
CC: @guangyey
| true
|
2,882,251,808
|
Introduce delayed compile via `eager_then_compile` stance
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147983
Recently I've been experimenting with introducing new APIs to delay compile as a way to reduce compile times while improving the ergonomics of using dynamic shapes. The high level idea is to run the first invocation of compile in eager, save the example inputs, and on the second invocation we can derive the dynamism in the inputs so that we don't need to waste our time doing a compile with static shapes (which is the status quo today with automatic dynamic).
Another benefit of this is most users no longer need to annotate their inputs with mark_dynamic and mark_unbaked calls since we can derive the dynamism on the very first call. Additionally we get dynamic ints out of the box in this new regime.
This PR implements this idea through the set_stance APIs. In particular it introduces a new `eager_then_compile` stance.
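A sketch of the intended usage; `"eager_then_compile"` is the new stance this PR introduces (`torch.compiler.set_stance` itself already exists):
```python
import torch

torch.compiler.set_stance("eager_then_compile")

@torch.compile
def f(x):
    return x * 2

f(torch.randn(4))  # first call runs eagerly, example inputs are recorded
f(torch.randn(8))  # later calls compile, with dynamism derived from the inputs
```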
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,882,251,690
|
introduce dynamism library
|
bobrenjc93
|
closed
|
[
"release notes: fx",
"fx"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147983
* __->__ #147982
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,882,237,399
|
introduce dynamism library
|
bobrenjc93
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ci-no-td"
] | 25
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147981
This is the first step in supporting delayed compile. This library takes in example inputs and outputs a dict of dynamism across the inputs. We will use this to detect dynamism across multiple inputs in delayed compile. We will also use this to make shape collections more ergonomic by providing an affordance to generate a shape collection using example inputs.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,882,237,050
|
Torch distribute module when local dtensor is already on the correct device
|
ArthurZucker
|
open
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 1
|
NONE
|
Hey! I am working on this pr: https://github.com/huggingface/transformers/pull/36335
And I had to remove the https://github.com/pytorch/pytorch/blob/main/torch/distributed/tensor/_api.py#L834 distribute_module function because it was broadcasting something when I don't need it to, since each process should be able to say "I am RANK and I have my local tensor." This is a huge overhead for bigger models (noticeable on a 7B in the repro of the linked PR).
🤗
Repro:
```python
# torchrun --master-addr 127.0.0.1 --nnodes 1 --nproc-per-node 4 /raid/arthur/test_safe_load.py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
import os
import time
from torch.cuda.memory import caching_allocator_alloc
import torch
import time
from transformers import AutoModelForCausalLM
from torch.profiler import profile, record_function, ProfilerActivity
model_path = "meta-llama/Meta-Llama-3-8B-Instruct"
# On main you need to init nccl manually
# rank = int(os.environ["RANK"])
# world_size = int(os.environ["WORLD_SIZE"])
# torch.distributed.init_process_group("nccl", rank=rank, world_size=world_size)
# torch.cuda.set_device(rank)
with torch.no_grad():
tokenizer = AutoTokenizer.from_pretrained(model_path)
start = time.time()
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
record_shapes=True, profile_memory=True) as prof:
with record_function("model_load"):
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
# tp_plan="auto",
device_map="auto",
attn_implementation="sdpa"
)
end = time.time()
print(f"Model loading time: {end - start:.2f} seconds")
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=25))
print(f"Loading took {end-start} seconds")
model.eval()
input_ids =tokenizer(["Roses are red,"], return_tensors="pt",add_special_tokens=True).to("cuda")
out = model.generate(**input_ids, max_new_tokens=20)
print(out)
print(tokenizer.batch_decode(out))
```
relevant piece of code:
```python
if device_mesh is not None: # In this case, the param is already on the correct device!
try:
module_to_tp: torch.nn.Module = model.get_submodule(layer)
except Exception:
raise ValueError(
"The config tp plan is wrong because the layer is not a liner layer, nor an embedding"
)
current_module_plan = None
full_tp_plan_ = "|".join(full_tp_plan.keys()).replace("*", "[0-9]+")
if plan := re.search(full_tp_plan_, module_name):
match = re.sub("[0-9]+", "*", plan[0])
current_module_plan = full_tp_plan[match]
if current_module_plan is not None:
tp_layer = translate_to_torch_parallel_style(current_module_plan)
rank = tensor_device
row, col = empty_param.shape
if "rowwise" == current_module_plan:
param = param[:, rank * (col // device_mesh.size()) : (rank + 1) * (col // device_mesh.size())]
shard = Shard(1)
tp_layer.desired_input_layouts = (Shard(-1),)
elif "colwise" == current_module_plan:
param = param[rank * (row // device_mesh.size()) : (rank + 1) * (row // device_mesh.size()), :]
shard = Shard(0)
else:
param = param[rank * (row // device_mesh.size()) : (rank + 1) * (row // device_mesh.size()), :]
shard = Shard(0)
local_parameter = DTensor.from_local(
param,
device_mesh=device_mesh,
placements=[shard] * device_mesh.ndim,
)
if isinstance(module_to_tp.weight, nn.Parameter):
local_parameter = torch.nn.Parameter(local_parameter)
module_to_tp.weight = local_parameter
input_fn = partial(
tp_layer._prepare_input_fn, tp_layer.input_layouts, tp_layer.desired_input_layouts
)
output_fn = partial(
tp_layer._prepare_output_fn, tp_layer.output_layouts, tp_layer.use_local_output
)
distribute_module(module_to_tp, device_mesh, None, input_fn, output_fn)
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,882,206,277
|
Support whitelist of dynamic sources
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147979
This PR introduces the ability to whitelist sources as dynamic. This is particularly useful for large models with graph breaks, as you can keep the dynamism across graph breaks since source names stay consistent. Additionally you can use this to mark ints as dynamic.
NB: I intentionally didn't complicate the interface by supporting specification of per-dimension dynamism. There is virtue in keeping true to the standard way of representing sources (e.g. L['x']). If we find in practice that we need more fine-grained control, we can explore further affordances at that time.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,882,156,954
|
[BE][Ez]: Remove redundant empty tensor copies in meta-reg
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
`empty_like` includes a `memory_format` arg. Let's use it to avoid unnecessary copy operations. Noticed while reviewing: https://github.com/pytorch/pytorch/pull/147862
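A minimal sketch of the pattern (illustrative only, not the meta-registration code itself):
```python
import torch

x = torch.randn(2, 3, 4, 5)

# Instead of allocating and then restriding/copying into the desired layout,
# pass memory_format directly so the tensor is allocated in that layout:
out = torch.empty_like(x, memory_format=torch.channels_last)
print(out.is_contiguous(memory_format=torch.channels_last))  # True
```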
| true
|
2,882,139,483
|
[BE] Do not copy arguments in variadic template
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147369
* __->__ #147977
By adding the missing `std::forward<Args>(args)...` and declaring the template as taking args by forwarding reference.
Noticed while working on creating an `mtl_setBytes` specialization that takes `MPSScalar` as an argument.
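A generic sketch of the pattern (not the actual PyTorch/MPS code): without forwarding references plus `std::forward`, every argument is copied at each call; with them, lvalues are passed by reference and rvalues are moved.
```cpp
#include <utility>

// Copies: each argument is passed by value on every hop.
template <typename Func, typename... Args>
void dispatch_by_value(Func f, Args... args) {
  f(args...);
}

// No extra copies: forwarding references + std::forward preserve value category.
template <typename Func, typename... Args>
void dispatch(Func&& f, Args&&... args) {
  std::forward<Func>(f)(std::forward<Args>(args)...);
}
```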
| true
|
2,882,139,224
|
[Testing] Tensor.set_ storage offset validation
|
mikaylagawarecki
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ciflow/slow",
"ci-no-td"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147976
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,882,057,413
|
[AOTI][refactor] Consolidate CppBuilder.build and CppBuilder.build_fbcode
|
desertfire
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: Let CppBuilder handle all the cpp build logic
Differential Revision: D70141808
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,882,041,558
|
DISABLED test_ranks_and_tag (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d"
] | 15
|
NONE
|
Platforms: inductor, rocm, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_ranks_and_tag&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37842166279).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_ranks_and_tag`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/distributed/test_c10d_functional_native.py", line 706, in setUp
dist.init_process_group(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 95, in wrapper
func_return = func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1638, in init_process_group
raise ValueError("trying to initialize the default process group twice!")
ValueError: trying to initialize the default process group twice!
```
</details>
Test file path: `distributed/test_c10d_functional_native.py`
cc @clee2000
| true
|
2,881,922,377
|
Torch Onnx Export (with Dynamo) does not recognize `Remainder` function
|
FabianSchuetze
|
open
|
[
"module: onnx",
"triaged",
"OSS contribution wanted",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following code fails:
```
import torch
class Mod(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, window_size):
_, H, W, C = x.shape
pad_h = (window_size - H % window_size) % window_size
pad_w = (window_size - W % window_size) % window_size
pad_h_ = pad_h.item()
pad_w_ = pad_w.item()
torch._check_is_size(pad_h_)
torch._check_is_size(pad_w_)
x = torch.nn.functional.pad(x, (0, 0, 0, pad_w_, 0, pad_h_))
return x
mod = Mod()
x = torch.rand(1, 200, 204, 3)
torch.onnx.export(mod, (x, torch.tensor(8)), dynamo=True, report=True)
```
with the following error:
```
➜ /tmp python3 main.py
/home/fabian/.local/lib/python3.12/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/home/fabian/.local/lib/python3.12/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
[torch.onnx] Obtain model graph for `Mod()` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `Mod()` with `torch.export.export(..., strict=False)`... ✅
[torch.onnx] Run decomposition...
W0226 15:56:09.858000 98676 torch/fx/experimental/symbolic_shapes.py:6184] Ignored guard u6 >= 0 == True, this could result in accuracy problems
W0226 15:56:09.863000 98676 torch/fx/experimental/symbolic_shapes.py:6184] Ignored guard u7 >= 0 == True, this could result in accuracy problems
[torch.onnx] Run decomposition... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ❌
[torch.onnx] Export report has been saved to 'onnx_export_2025-02-26_15-56-08-339010_conversion.md'.
Traceback (most recent call last):
File "/home/fabian/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 708, in _translate_fx_graph
_handle_call_function_node_with_lowering(
File "/home/fabian/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 490, in _handle_call_function_node_with_lowering
raise _errors.DispatchError(
torch.onnx._internal.exporter._errors.DispatchError: No ONNX function found for <OpOverload(op='prims.remainder', overload='default')>. Failure message: No decompositions registered for the real-valued input
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/fabian/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 1372, in export
onnx_program = _exported_program_to_onnx_program(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/fabian/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 1008, in _exported_program_to_onnx_program
values = _translate_fx_graph(
^^^^^^^^^^^^^^^^^^^^
File "/home/fabian/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 734, in _translate_fx_graph
raise _errors.ConversionError(
torch.onnx._internal.exporter._errors.ConversionError: Error when translating node %remainder : [num_users=1] = call_function[target=torch.ops.prims.remainder.default](args = (200, %window_size), kwargs = {}). See the stack trace for more information.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/main.py", line 21, in <module>
torch.onnx.export(mod, (x, torch.tensor(8)), dynamo=True, report=True)
File "/home/fabian/.local/lib/python3.12/site-packages/torch/onnx/__init__.py", line 351, in export
return _compat.export_compat(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/fabian/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_compat.py", line 304, in export_compat
onnx_program = _core.export(
^^^^^^^^^^^^^
File "/home/fabian/.local/lib/python3.12/site-packages/torch/onnx/_internal/exporter/_core.py", line 1416, in export
raise _errors.ConversionError(
torch.onnx._internal.exporter._errors.ConversionError: Failed to convert the exported program to an ONNX model. This is step 3/3 of exporting the model to ONNX. Next steps:
- If there is a missing ONNX function, implement it and register it to the registry.
- If there is an internal error during ONNX conversion, debug the error and summit a PR to PyTorch.
- Create an error report with `torch.onnx.export(..., report=True)`, and save the ExportedProgram as a pt2 file. Create an issue in the PyTorch GitHub repository against the *onnx* component. Attach the error report and the pt2 model.
Error report has been saved to 'onnx_export_2025-02-26_15-56-08-339010_conversion.md'.
## Exception summary
<class 'torch.onnx._internal.exporter._errors.DispatchError'>: No ONNX function found for <OpOverload(op='prims.remainder', overload='default')>. Failure message: No decompositions registered for the real-valued input
⬆️
<class 'torch.onnx._internal.exporter._errors.ConversionError'>: Error when translating node %remainder : [num_users=1] = call_function[target=torch.ops.prims.remainder.default](args = (200, %window_size), kwargs = {}). See the stack trace for more information.
(Refer to the full stack trace above for more information.)
```
Can it be that `torch.remainder` is not linked with [`Onnx.Mod`](https://onnx.ai/onnx/operators/onnx__Mod.html) ? `Torch.export.export` works. The code above is used in many `window_partition` functions for ViTs.
Is there a way I can export the code with onnx?
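A possible workaround, assuming the window size can be fixed to a plain Python int at export time (this sidesteps the missing `prims.remainder` lowering rather than fixing it):
```python
import torch

class Mod(torch.nn.Module):
    def forward(self, x, window_size: int):
        _, H, W, C = x.shape
        # With a Python int, the modulo folds at export time and no
        # prims.remainder node is emitted into the graph.
        pad_h = (window_size - H % window_size) % window_size
        pad_w = (window_size - W % window_size) % window_size
        return torch.nn.functional.pad(x, (0, 0, 0, pad_w, 0, pad_h))

x = torch.rand(1, 200, 204, 3)
onnx_program = torch.onnx.export(Mod(), (x, 8), dynamo=True)
```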
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-17-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 500 Ada Generation Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 22
On-line CPU(s) list: 0-21
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 7 155H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 27%
CPU max MHz: 4800.0000
CPU min MHz: 400.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 896 KiB (14 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-21
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.2
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.2.0
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
cc @chauhang @penguinwu
| true
|
2,881,886,561
|
torch._scaled_mm reproducibility
|
christopher5106
|
closed
|
[
"needs reproduction",
"triaged",
"module: float8"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hi,
I'm using the `torch._scaled_mm` method as in this example: https://github.com/aredden/flux-fp8-api/blob/main/float8_quantize.py#L284 but I don't get reproducible results on H100 and H200 machines.
I tried to set `use_fast_accum=False` and `torch.backends.cudnn.deterministic = True` but there are still slight variations.
What should I do to ensure bit-identical results?
thanks
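A minimal sketch of the general determinism settings in question (whether they cover the `_scaled_mm` cuBLAS/CUTLASS paths is the open question here; bit-identical results across different GPU models are generally not guaranteed):
```python
import os
import torch

# General determinism knobs; coverage of torch._scaled_mm is not guaranteed.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
torch.use_deterministic_algorithms(True, warn_only=True)
torch.backends.cudnn.benchmark = False

a = torch.randn(128, 64, device="cuda").to(torch.float8_e4m3fn)
b = torch.randn(128, 64, device="cuda").to(torch.float8_e4m3fn).t()  # column-major
scale = torch.tensor(1.0, device="cuda")
out = torch._scaled_mm(a, b, scale_a=scale, scale_b=scale,
                       out_dtype=torch.bfloat16, use_fast_accum=False)
```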
### Versions
torch-2.7.0.dev20250211
cc @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
2,881,549,557
|
Fp8 scaled-mm row-wise is substantially slower than tensor-wise
|
lw
|
open
|
[
"module: performance",
"module: cuda",
"triaged",
"topic: performance",
"module: float8"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
### Overview
PyTorch's float8 matmul, exposed through the `_scaled_mm` operator, supports two scaling modes: "tensor-wise", which just calls into a public cuBLAS API, and "row-wise", which is implemented as a custom CUTLASS kernel in PyTorch.
On top of that, `_scaled_mm` has two modes: fast-accum, which accumulates all intermediate values inside the TensorCore's "fp22" accumulators, and slow-accum, which promotes these accumulators to full fp32 precision every 128 elements.
As things stand today, row-wise mode is substantially slower than tensor-wise mode. It's unclear whether this is an inherent limitation (row-wise needs to load a little bit more data) or whether it comes from a suboptimal implementation in CUTLASS (which I deem more likely). It would be valuable to close this gap.
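For reference, a minimal sketch of the two scaling modes as exposed by `_scaled_mm` (illustrative shapes, not the benchmark itself):
```python
import torch

M, K, N = 8192, 8192, 8192
a = torch.randn(M, K, device="cuda").to(torch.float8_e4m3fn)
b = torch.randn(N, K, device="cuda").to(torch.float8_e4m3fn).t()  # column-major

# Tensor-wise: one scalar scale per operand (public cuBLAS path).
out_tw = torch._scaled_mm(a, b,
                          scale_a=torch.tensor(1.0, device="cuda"),
                          scale_b=torch.tensor(1.0, device="cuda"),
                          out_dtype=torch.bfloat16)

# Row-wise: one scale per row of A and per column of B (custom CUTLASS kernel).
out_rw = torch._scaled_mm(a, b,
                          scale_a=torch.ones(M, 1, device="cuda"),
                          scale_b=torch.ones(1, N, device="cuda"),
                          out_dtype=torch.bfloat16)
```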
### Benchmarks with `fast_accum=True`
This mode was optimized by me in https://github.com/pytorch/pytorch/pull/134781, where I improved the heuristics of choosing tile sizes. Most shapes are within 90-100% of tensor-wise efficiency, but there's a long tail going down to 50%.
Relative latencies of row-wise wrt tensor-wise:

Visualization for each shape:

### Benchmarks with `fast_accum=False`
In this mode, the bulk of the shapes is only around 80% efficient, with very few shapes going beyond 90%.
Relative latencies of row-wise wrt tensor-wise:

Visualization for each shape:

### Benchmark code and full results
The benchmark script is basically the same used in #134781.
The raw measurements can be found here: https://gist.github.com/lw/58e660a5f8fa41f72029fdebb8417280
### Versions
N/A
cc @msaroufim @ptrblck @eqy @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
2,881,492,508
|
DISABLED test_complex_half_reference_testing_fft_irfft_cuda_complex32 (__main__.TestCommonCUDA)
|
pytorch-bot[bot]
|
closed
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"module: unknown"
] | 2
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_complex_half_reference_testing_fft_irfft_cuda_complex32&suite=TestCommonCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37831932681).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_complex_half_reference_testing_fft_irfft_cuda_complex32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/test_ops.py", line 1431, in test_complex_half_reference_testing
self.assertEqual(actual, expected, exact_dtype=False)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4096, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 0 / 144 (0.0%)
Greatest absolute difference: 0.0 at index (0, 0, 0) (up to 0.04 allowed)
Greatest relative difference: 0.0 at index (0, 0, 0) (up to 0.04 allowed)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3155, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1239, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1616, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(2, 9, 9), device="cuda:0", dtype=torch.complex32], args=(), kwargs={'n': '8', 'dim': '1', 'norm': "'ortho'"}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_WITH_ROCM=1 python test/test_ops.py TestCommonCUDA.test_complex_half_reference_testing_fft_irfft_cuda_complex32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_ops.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000
| true
|
2,881,320,580
|
[Intel GPU] Avoid including CPU oneDNN header files for Intel GPU
|
EikanWang
|
closed
|
[
"module: cpu",
"module: mkldnn",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 3
|
COLLABORATOR
|
XPU builds oneDNN in a separate folder. The XPU oneDNN header files live in the XPU-specific folder `${__XPU_MKLDNN_BUILD_DIR}`.
https://github.com/pytorch/pytorch/blob/f522d899fb297453d0b821140bac38c1b4eef569/cmake/Modules/FindMKLDNN.cmake#L73
So, `${PROJECT_SOURCE_DIR}/third_party/ideep/mkl-dnn/include` is useless for XPU; `XPU_MKLDNN_INCLUDE` is good enough. Moreover, it may pull in the wrong headers if the XPU oneDNN version differs from the one used by other backends.
* __->__ #147969
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,881,268,861
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [86aaaf8a9dd6932c088b7afcac0c0856b23d341a](https://github.com/intel/torch-xpu-ops/commit/86aaaf8a9dd6932c088b7afcac0c0856b23d341a), which includes:
- Bugfix (PT2E/BatchNorm)
| true
|
2,881,139,373
|
[torch/elastic][upstream] Fix the wrong order when start_index is not 0
|
zhengchenyu
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"Stale",
"module: elastic",
"release notes: dataloader"
] | 3
|
NONE
|
For ElasticDistributedSampler: if a job is restarted, we resume training from start_index, which means the indices must keep a stable order. Currently they don't.
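A conceptual sketch of the intended behaviour (not the actual sampler code): the resume slice should be taken from the globally ordered index list before sharding, so every rank sees a consistent order across restarts.
```python
import torch

def resumable_indices(dataset_len, num_replicas, rank, start_index, seed, epoch):
    # Same global permutation on every rank, then skip already-consumed samples,
    # then shard; this keeps the order stable across restarts.
    g = torch.Generator()
    g.manual_seed(seed + epoch)
    indices = torch.randperm(dataset_len, generator=g).tolist()
    indices = indices[start_index:]
    return indices[rank::num_replicas]
```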
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @dzhulgakov
| true
|
2,881,095,175
|
Add option to limit number of SMs used by matmul kernels
|
lw
|
closed
|
[
"module: cuda",
"Merged",
"release notes: cuda",
"topic: performance",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147966
Resubmission of #144974 which was reverted for unrelated reasons.
Newer matmul kernels, e.g. those targeting Hopper GPUs, sometimes use a "persistent" schedule which consists of launching as many CUDA blocks as there are SMs on the GPU, with each such block then working on multiple output tiles in a row. This eliminates the overhead of starting and finishing each tile, effectively doing cross-tile pipelining. In previous generations these latencies could be hidden by having multiple CUDA blocks per SM but, with blocks becoming larger, only one can run at a time per SM and thus this needs to be taken care of in software.
Persistent kernels become an issue when other kernels are running concurrently. The classical example is a NCCL communication kernel running in the background. In such cases the matmul expects to be able to use all the SMs but is prevented from doing so because some of them are busy. This can lead to its blocks being scheduled as two separate waves on the available SMs. This "wave quantization" can double the latency of the matmul kernels.
While we wait for smarter solutions, such as automatic load balancing among the blocks, an easy way to unblock ourselves is to tell the matmuls to only use a subset of the GPU's SMs. For this, I am introducing a global `sm_carveout` flag which can be used to specify how many SMs should be left available for other kernels.
For now I only change the cuBLAS kernels and the scaled-mm CUTLASS kernel. More kernels can be opted-in later.
I tested this change manually, by using the Kineto profiler to look up the grid size of a scaled-mm kernel with different values of `sm_carveout`, and making sure it changed. Suggestions are welcome for a more automated test.
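For illustration only, a rough sketch of how such a knob might be exercised; the setter name below is a placeholder assumption, not the interface added by this PR:
```python
import torch

props = torch.cuda.get_device_properties(0)
carveout = 8  # SMs to leave free for e.g. a background NCCL kernel

# Placeholder call, illustrative only; the real flag is defined by this PR:
# torch._C._set_sm_carveout_experimental(carveout)

a = torch.randn(8192, 8192, device="cuda", dtype=torch.bfloat16)
b = torch.randn(8192, 8192, device="cuda", dtype=torch.bfloat16)
c = a @ b  # a persistent kernel would now use at most (SM count - carveout) blocks
print(f"{props.multi_processor_count - carveout} SMs left for the matmul")
```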
cc @ptrblck @msaroufim @eqy @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,880,916,216
|
DISABLED test_mixed_mm_gating (__main__.TestPatternMatcher)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mixed_mm_gating&suite=TestPatternMatcher&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37833956168).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mixed_mm_gating`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 498, in test_mixed_mm_gating
self._test_mixed_impl(fn, args, True, False)
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 333, in _test_mixed_impl
FileCheck().check("k_idx").check(".to(").check("tl.dot").run(code)
RuntimeError: Expected to find ".to(" but did not find it
Searched string:
acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=ACC_TYPE)
for k_idx in range(0, tl.cdiv(K, BLOCK_K)):
a_mask = offs_k[None, :] < (K - k_idx * BLOCK_K)
b_mask = offs_k[:, None] < (K - k_idx * BLOCK_K)
a_k_idx_vals = offs_k[None, :] + (k_idx * BLOCK_K)
b_k_idx_vals = offs_k[:, None] + (k_idx * BLOCK_K)
idx_m = offs_a_m[:, None]
idx_n = a_k_idx_vals
xindex = idx_n + 8*idx_m
a = tl.load(A + (xindex), mask=a_mask, other=0.0)
idx_m = b_k_idx_vals
idx_n = offs_b_n[None, :]
xindex = idx_n + 8*idx_m
b = tl.load(B + (xindex), mask=b_mask, other=0.0)
acc += tl.dot(a, b, allow_tf32=ALLOW_TF32)
# rematerialize rm and rn to save registers
rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
idx_m = rm[:, None]
idx_n = rn[None, :]
mask = (idx_m < M) & (idx_n < N)
# inductor generates a suffix
xindex = idx_n + 8*idx_m
tl.store(out_ptr0 + (tl.broadcast_to(xindex, acc.shape)), acc, mask)
''', device_str='cuda')
meta0 = {'GROUP_M': 8, 'EVEN_K': False, 'ALLOW_TF32': 'False', 'ACC_TYPE': 'tl.float32', 'BLOCK_M': 16, 'BLOCK_N': 16, 'BLOCK_K': 16, 'matrix_instr_nonkdim': 16}
async_compile.wait(globals())
del async_compile
def call(args):
arg0_1, arg1_1 = args
args.clear()
assert_size_stride(arg0_1, (8, 8), (8, 1))
assert_size_stride(arg1_1, (8, 8), (8, 1))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((8, 8), (8, 1), torch.float32)
# Topologically Sorted Source Nodes: [to], Original ATen: [aten._to_copy]
stream0 = get_raw_stream(0)
triton_poi_fused__to_copy_0.run(arg0_1, buf0, 64, grid=grid(64), stream=stream0)
del arg0_1
buf1 = empty_strided_cuda((8, 8), (8, 1), torch.float32)
# Topologically Sorted Source Nodes: [to, mm], Original ATen: [aten._to_copy, aten.mm]
stream0 = get_raw_stream(0)
triton_tem_fused__to_copy_mm_1.run(arg1_1, buf0, buf1, grid=torch._inductor.kernel.mm_common.mm_grid(8, 8, meta0), stream=stream0)
del arg1_1
del buf0
return (buf1, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((8, 8), (8, 1), device='cuda:0', dtype=torch.int8)
arg1_1 = rand_strided((8, 8), (8, 1), device='cuda:0', dtype=torch.float32)
fn = lambda: call([arg0_1, arg1_1])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: .to(
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_pattern_matcher.py TestPatternMatcher.test_mixed_mm_gating
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_pattern_matcher.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,876,057
|
[test][do not merge] test on 90e3a3d86d6139a7b00bdf56bdfe0f63ad18e980
|
yanbing-j
|
closed
|
[
"module: cpu",
"module: mkldnn",
"open source",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"intel",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
This is a test PR of the https://github.com/pytorch/pytorch/pull/147498 code base. We need to build the Windows binary, then download it and run test_mkldnn.py.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,880,759,928
|
Bump Protobuf to 5.29
|
cyyever
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
CMake builds work, but I have no idea about how to fix Bazel builds.
| true
|
2,880,700,285
|
Facilitate at::_weight_int4pack_mm_with_scale_and_zeros related registration
|
ZhiweiYan-96
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 10
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147962
* #137566
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,696,488
|
[Inductor][CPP] fix store mode atomic add
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147961
**Summary**
Fix issue: https://github.com/pytorch/pytorch/issues/147848 and https://github.com/pytorch/pytorch/issues/146390. While addressing these issues, 2 problems were encountered:
- In `CppVecKernel`, when the number of threads is 1 and the mode is `atomic_add`, `store` did not `load/add` before storing. This has been fixed in this PR.
- In `CppTile2DKernel`, `store` did not support `atomic_add` mode. Support for this has been added in this PR.
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_repro.py -k test_nn_fold
```
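For reference, a minimal repro sketch along the lines of the linked issues (assumed shapes): `nn.Fold` sums overlapping patches, which lowers to atomic-add stores on CPU.
```python
import torch

fold = torch.nn.Fold(output_size=(4, 4), kernel_size=(2, 2))
x = torch.randn(1, 4, 9)

eager = fold(x)
compiled = torch.compile(fold)(x)
torch.testing.assert_close(eager, compiled)
```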
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,677,910
|
[TEST]
|
muchulee8
|
closed
|
[
"fb-exported",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Test Plan: test
Differential Revision: D70231766
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @amjames @chauhang @aakhundov
| true
|
2,880,664,028
|
[test][do not merge]Upgrade oneDNN to v3.7(27)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,662,346
|
[test][do not merge]Upgrade oneDNN to v3.7(26)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,661,282
|
[test][do not merge]Upgrade oneDNN to v3.7 (25)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,659,418
|
[test][do not merge]Upgrade oneDNN to v3.7 (24)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,655,680
|
[test][do not merge]Upgrade oneDNN to v3.7 (28)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ci-no-td",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,654,115
|
[Inductor-CPU] LLaMA doesn't use templated GEMMs for da8w8 quantization for next-token generation
|
sanchitintel
|
closed
|
[
"oncall: cpu inductor"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
If LLaMA models are run with torchao da8w8 quantization (dynamically quantized int8 activation & int8 quantized weights), then templated GEMMs are not being used in the next-token generation case, except for LM head linear.
Some pattern-matching passes such as `_register_quantization_binary_lowering` replace all MLP linears in the model's graph with `torch.ops.onednn.qlinear_pointwise.binary_tensor`.
https://github.com/pytorch/pytorch/blob/7a06bfdd1c778ec84a3d2334a7c66a5bcdc29f61/torch/_inductor/fx_passes/quantization.py#L1215
There's an opportunity cost: the GEMM template cannot be used for `qlinear_binary` when the `sum` post-op is present instead of `add` and `x2` is non-quantized. Without completion of this TODO item, the LLaMA model would only be able to use templated GEMMs for the LM head linear for next-token generation. Thanks!
https://github.com/pytorch/pytorch/blob/acca9b9cb0a38b2eb1bfd5fe0aaff3760dd77812/torch/_inductor/mkldnn_lowerings.py#L937-L939
**Ideally, inplace sum post-op should be supported**. Creating a tracker issue.
If I try enabling GEMM template for `qlinear_binary` with `sum` post-op _without_ using inplace compute (not sure how performant it'd be), the problem I face is that the GEMM template expects 2D inputs, so 3D inputs are converted to 2D. Then the output is not being reshaped back to 3D for epilogues, which can be problematic for epilogues that index into an intermediate output.
Here's an example -
<details>
```
# in forward, code: hidden_states = residual + hidden_states
qlinear_pointwise_binary_tensor_63: "bf16[1, 1, 4096]" = torch.ops.onednn.qlinear_pointwise.binary_tensor(convert_element_type_28, view_44, None, qlinear_prepack_64, convert_element_type_default_64, None, embedding, None, 1.0, 0, torch.bfloat16, 1.0, 0, 'sum', 1.0, 'none', [], ''); convert_element_type_28 = view_44 = qlinear_prepack_64 = convert_element_type_default_64 = embedding = None
# in forward, code: hidden_states = hidden_states.to(torch.float32)
convert_element_type_30: "f32[1, 1, 4096]" = torch.ops.prims.convert_element_type.default(qlinear_pointwise_binary_tensor_63, torch.float32)
# in forward, code: variance = hidden_states.pow(2).mean(-1, keepdim=True)
pow_2: "f32[1, 1, 4096]" = torch.ops.aten.pow.Tensor_Scalar(convert_element_type_30, 2)
mean_1: "f32[1, 1, 1]" = torch.ops.aten.mean.dim(pow_2, [-1], True); pow_2 = None
# hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
add_74: "f32[1, 1, 1]" = torch.ops.aten.add.Tensor(mean_1, 1e-05); mean_1 = None
rsqrt_1: "f32[1, 1, 1]" = torch.ops.aten.rsqrt.default(add_74); add_74 = None
mul_130: "f32[1, 1, 4096]" = torch.ops.aten.mul.Tensor(convert_element_type_30, rsqrt_1); convert_element_type_30 = rsqrt_1 = None
# return self.weight * hidden_states.to(input_dtype)
convert_element_type_31: "bf16[1, 1, 4096]" = torch.ops.prims.convert_element_type.default(mul_130, torch.bfloat16); mul_130 = None
mul_131: "bf16[1, 1, 4096]" = torch.ops.aten.mul.Tensor(arg10_1, convert_element_type_31); arg10_1 = convert_element_type_31 = None
# self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
amin_4: "bf16[1, 1]" = torch.ops.aten.amin.default(mul_131, [2])
```
Here, `amin` directly uses index 2
</details>
If we were okay with not using inplace add, though, we might as well try disabling the quantization pattern-matching pass that uses `qlinear_binary`, so that `qlinear_unary` could be used instead, but we can't do that as it'd cause regressions elsewhere.
### Versions
Current main branch
cc @leslie-fang-intel @chunyuan-w
| true
|
2,880,622,994
|
Request for Binary Version of torch==2.5.1 with CUDA 12.4 for ARM 7.5 (Graviton2)
|
michaelsheka
|
closed
|
[
"module: binaries",
"triaged"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
Hi all,
First of all, I want to thank you for the amazing library!
I’m currently trying to find a binary version that meets the following specifications:
torch==2.5.1
CUDA version: 12.4
Architecture: ARM 7.5 for Graviton2
Could you please guide me on how to obtain or build this?
Thank you in advance!
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,880,529,406
|
Insert custom op to fix scale shapes for async TP + float8 rowwise scaling with "reshape -> scaled mm -> reshape" pattern
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"release notes: distributed (pipeline)",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Part of https://github.com/pytorch/torchtitan/issues/864
## Summary
While testing torchtitan with float8 training with rowwise scaling + async TP, a [bug](https://github.com/pytorch/torchtitan/issues/864) was discovered. The symptom was the scaling factor dims did not match the dims of the tensor the scales were to be applied to.
My [root cause analysis](https://github.com/pytorch/torchtitan/issues/864#issuecomment-2672465060) determined the reason is that when async TP graph manipulation constructs the `fused_scaled_matmul_reduce_scatter` op, it does not yet handle the "reshape -> scaled mm -> reshape" pattern used in torchao [here](https://github.com/pytorch/ao/blob/ed361ff5c7dd33aba9b4a0da2bd744de5a5debfb/torchao/float8/float8_linear.py#L122-L124) - specifically when row-wise scales are being used.
## TL;DR of root cause
- When a Float8Tensor is reshaped, the scale is reshaped along with it so the dimensions are aligned.
- In the graph manipulation logic of the micropipeline TP post grad pass, the scaled_mm `A tensor` node references the tensor _before_ the reshape op, but references the `A_scale` node _after_ the reshape op.
## Solution and example
- To solve this, if a reshape -> scaled mm -> reshape pattern is detected, we can ensure both the tensor and the scale have compatible shapes. The most generic way to do this is to introduce a custom op which reshapes the scale to match the target tensor if the dims don't match (a rough sketch follows the example below).
- Concrete example:
- `A tensor` is a Float8Tensor with shape (1,8192,2048) and scale of shape (1,8192,1) when a matmul op is called in torchao [here](https://github.com/pytorch/ao/blob/8706d3f3b087b876d625c720e98236c265c0ba98/torchao/float8/float8_linear.py#L70). Torchao does a reshape -> scaled mm -> reshape [here](https://github.com/pytorch/ao/blob/ed361ff5c7dd33aba9b4a0da2bd744de5a5debfb/torchao/float8/float8_linear.py#L122). When a Float8Tensor is reshaped, its scale is reshaped along with it [here](https://github.com/pytorch/ao/blob/8706d3f3b087b876d625c720e98236c265c0ba98/torchao/float8/float8_ops.py#L152). So the first reshape makes the "A tensor" (1,8192,2048) => (8192,2048) and the scale (1,8192,1) => (8192,1).
- During post grad pass in async TP:
- `A_node` has shape (1,8192,2048) (tensor from before this [reshape](https://github.com/pytorch/ao/blob/ed361ff5c7dd33aba9b4a0da2bd744de5a5debfb/torchao/float8/float8_linear.py#L122))
- `A_scale` has shape (8192,1) (due to reshape op above, which caused the scale to be reshaped from (1,8192,1) => (8192,1)).
- Solution: custom op in this PR detects A_node ndims != A_scale ndims, and tries to calculate a way to reshape the scale to match ndims of A_node. It converts (8192,1) -> (1,8192,1), which is compatible with the A_node dims.
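A rough sketch of the reshape idea (a hypothetical helper, not the custom op added by this PR):
```python
import torch

def reshape_scale_to_match(scale: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # If the row-wise scale lost leading dims during the
    # reshape -> scaled_mm -> reshape pattern, re-add size-1 leading dims so it
    # broadcasts against the un-reshaped A tensor.
    if scale.ndim == target.ndim:
        return scale
    new_shape = [1] * (target.ndim - scale.ndim) + list(scale.shape)
    return scale.reshape(new_shape)

A = torch.randn(1, 8192, 2048)
A_scale = torch.ones(8192, 1)
print(reshape_scale_to_match(A_scale, A).shape)  # torch.Size([1, 8192, 1])
```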
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,526,905
|
[WIP][Intel GPU][do not merge] Enable SDPA on XPU
|
DDEle
|
closed
|
[
"module: cpu",
"module: mkldnn",
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 5
|
CONTRIBUTOR
|
TEST ONLY
For XPU SDPA...
```
git reset --hard origin/viable/strict
git merge yanbing-j/yanbing/upgrade_onednn_v3.7
git merge DDEle/onednn_graph_sdpa-integration
new commits to be tested...
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,415,709
|
DISABLED test_inductor_reuse_buffer_after_inplace_collective (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d",
"oncall: pt2"
] | 19
|
NONE
|
Platforms: inductor, rocm, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_reuse_buffer_after_inplace_collective&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37828763667).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_reuse_buffer_after_inplace_collective`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/test_c10d_functional_native.py`
cc @clee2000 @wdvr @chauhang @penguinwu
| true
|
2,880,415,440
|
DISABLED test_reorder_peak_memory_bfs (__main__.TestOperatorReorderForPeakMemory)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reorder_peak_memory_bfs&suite=TestOperatorReorderForPeakMemory&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37828755697).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 5 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reorder_peak_memory_bfs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_memory.py", line 157, in test_reorder_peak_memory_bfs
.run(code)
RuntimeError: Expected to find "buf2 = " but did not find it
Searched string:
stream0 = get_raw_stream(0)
triton_poi_fused_mm_0.run(primals_5, buf4, 12, grid=grid(12), stream=stream0)
buf1 = empty_strided_cuda((2048, 12), (12, 1), torch.float32)
# Topologically Sorted Source Nodes: [t1], Original ATen: [aten.mm]
extern_kernels.mm(primals_2, buf0, out=buf1)
del buf0
buf5 = empty_strided_cuda((2048, 12), (12, 1), torch.float32)
# Topologically Sorted Source Nodes: [t4], Original ATen: [aten.mm]
extern_kernels.mm(buf2, buf4, out=buf5)
del buf4
buf3 = empty_strided_cuda((2048, 1), (1, 1), torch.float32)
# Topologically Sorted Source Nodes: [t3], Original ATen: [aten.mm]
extern_kernels.mm(reinterpret_tensor(buf1, (2048, 10), (12, 1), 0), primals_4, out=buf3)
buf7 = empty_strided_cuda((3, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [sum_2], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_1.run(buf5, buf7, 3, 6827, grid=grid(3), stream=stream0)
del buf5
buf6 = empty_strided_cuda((), (), torch.float32)
# Topologically Sorted Source Nodes: [sum_1], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_2.run(buf3, buf6, 1, 2048, grid=grid(1), stream=stream0)
del buf3
buf9 = buf6; del buf6 # reuse
# Topologically Sorted Source Nodes: [sum_2, add], Original ATen: [aten.sum, aten.add]
stream0 = get_raw_stream(0)
triton_per_fused_add_sum_3.run(buf9, buf7, 1, 3, grid=grid(1), stream=stream0)
del buf7
return (buf9, primals_2, reinterpret_tensor(buf2, (1, 2048), (1, 1), 0), reinterpret_tensor(primals_5, (10, 1), (1, 10), 0), reinterpret_tensor(buf1, (10, 2048), (1, 12), 0), reinterpret_tensor(primals_4, (1, 10), (1, 1), 0), )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
primals_1 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
primals_2 = rand_strided((2048, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_3 = rand_strided((1, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_4 = rand_strided((10, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_5 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
fn = lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: buf2 =
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_memory.py TestOperatorReorderForPeakMemory.test_reorder_peak_memory_bfs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_memory.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,409,623
|
[MPS] Add support for `entr()` in eager.
|
dcci
|
closed
|
[
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor"
] | 5
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,326,541
|
Fix the benchmark config name from H100 benchmark
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
When using the wrong benchmark configs, the benchmark jobs will be skipped. The name should have the `_cuda_h100` suffix as used in the test matrix.
| true
|
2,880,323,644
|
torch.compile supported with GIL disabled
|
shiyang-weng
|
open
|
[
"triaged",
"module: python frontend",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
I want to try python3.13t to get better performance.
But found https://github.com/pytorch/pytorch/blob/acca9b9cb0a38b2eb1bfd5fe0aaff3760dd77812/torch/_dynamo/eval_frame.py#L827C61-L827C78
"torch.compile is not supported on Python built with GIL disabled"
Is there any plan to support it?
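For anyone else checking which interpreter they are on, a small sketch that detects a free-threaded build before calling `torch.compile`:
```python
import sys
import sysconfig

free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
print(f"free-threaded build: {free_threaded_build}, GIL currently enabled: {gil_enabled}")
```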
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,880,290,159
|
[test][do not merge ]Upgrade oneDNN to v3.7 (23)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,289,402
|
[test][do not merge]Upgrade oneDNN to v3.7(22)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,288,961
|
[test][do not merge]Upgrade oneDNN to v3.7(21)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,288,287
|
[test][do not merge]Upgrade oneDNN to v3.7(20)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,287,576
|
[test][do not merge]Upgrade oneDNN to v3.7 (19)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,286,730
|
[test][do not merge]Upgrade oneDNN to v3.7(18)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,286,024
|
[test][do not merge]Upgrade oneDNN to v3.7(17)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,285,322
|
[test][do not merge]Upgrade oneDNN to v3.7(16)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,284,937
|
[test][do not merge]Upgrade oneDNN to v3.7(15)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,284,242
|
[test][do not merge ]Upgrade oneDNN to v3.7 (14)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,283,431
|
[test][do not merge]Upgrade oneDNN to v3.7(13)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,282,719
|
[test][do not merge]Upgrade oneDNN to v3.7(12)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|