| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,938,657,074
|
stride asserts should name the operator involved
|
zou3519
|
open
|
[
"high priority",
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: inductor",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
```
File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_inductor/utils.py", line 2348, in run
return model(new_inputs)
File "/tmp/torchinductor_nobody/27/c2765fyur2v7aek4rc762oibztfzekpdgupovpfnad463vcqmrtj.py", line 205, in call
assert_size_stride(primals_1, (768, 192), (192, 1))
AssertionError: expected size 1024==768, stride 192==192 at dim=0
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
```
pitch: we can probably pass the name of the operator to assert_size_stride.
I have debugged 3 of these in the last week
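As a rough illustration of the pitch (a hypothetical wrapper only, assuming the three-argument `assert_size_stride` form shown in the traceback above), generated code could attach the producing op's name to the failure:
```python
# Hypothetical sketch: wrap the existing check and name the op on failure.
from torch._C._dynamo.guards import assert_size_stride

def assert_size_stride_named(tensor, size, stride, op_name=None):
    try:
        assert_size_stride(tensor, size, stride)
    except AssertionError as e:
        hint = f" (output of {op_name})" if op_name else ""
        raise AssertionError(f"{e}{hint}") from None

# Generated code could then emit, e.g.:
# assert_size_stride_named(primals_1, (768, 192), (192, 1), op_name="mylib.my_custom_op")
```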
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov @bdhirsh
| true
|
2,938,627,366
|
Do not depend on numpy during the import
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
But a good followup would be to use torch primitives instead of numpy here
Fixes https://github.com/pytorch/pytorch/issues/149681
Test plan: Monkey-patch 2.7.0-rc and run `python -c "import torch;print(torch.compile(lambda x:x.sin() + x.cos())(torch.rand(32)))"`
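The exact call site isn't referenced here, but as a generic illustration of the kind of swap the followup suggests (hypothetical example, not the actual code path):
```python
import torch

# Instead of, say:
#   import numpy as np
#   scale = float(np.sqrt(np.mean(np.asarray(values) ** 2)))
# the same computation with torch primitives avoids importing numpy at all:
values = torch.arange(8, dtype=torch.float32)
scale = torch.sqrt(torch.mean(values ** 2)).item()
```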
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,938,545,347
|
[MPS] Replace indexed with strided flavor
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149730
This renders non-contiguous operations much faster for larger tensors; for example, `fmax` of 1000x1000 strided tensors takes 270ms with the new algorithm versus 430ms with the old one, which needed an additional tensor of 3e6 elements to function.
TODO: Add 64-bit indexing logic, as the current implementation has the same limitation as `generateKernelDataOffsets`
| true
|
2,938,545,174
|
[MPS][BE] Get rid of `supports_dense` flag
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149730
* __->__ #149729
* #149728
* #149727
As all binary ops now support dense tensors
| true
|
2,938,467,955
|
[MPS][BE] Migrate complex_mul to tensor iterator
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149730
* #149729
* __->__ #149728
* #149727
| true
|
2,938,467,787
|
[MPS][BE] Migrate `torch.complex` to binary_functor
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"release notes: mps",
"ciflow/mps"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149730
* #149729
* #149728
* __->__ #149727
As it's very similar in nature to `torch.polar`
Though it renames the kernel from `complex_kernel` to `make_complex`
| true
|
2,938,441,521
|
SDPA gives different outputs compared to manual attention with `dropout>0.0`
|
abdulfatir
|
closed
|
[
"triaged",
"module: random",
"module: sdpa"
] | 3
|
NONE
|
### 🐛 Describe the bug
SDPA gives different outputs compared to manual attention when the `EFFICIENT_ATTENTION` backend is used and dropout is non-zero. Is this expected? Is the efficient kernel using a different RNG?
Here's an MWE:
```py
import torch
from torch.nn.functional import scaled_dot_product_attention
from torch.nn.attention import SDPBackend, sdpa_kernel
def manual_attention(query, key, value, mask, dropout=0.0):
scores = torch.matmul(query, key.transpose(3, 2))
scores += mask
attn_weights = torch.nn.functional.softmax(scores.float(), dim=-1).type_as(scores)
attn_weights = torch.nn.functional.dropout(attn_weights, p=dropout, training=True)
attn_output = torch.matmul(attn_weights, value)
return attn_output
def compare(query, key, value, mask, dropout=0.0, backends: list = []):
torch.manual_seed(0)
manual_result = manual_attention(query, key, value, mask=mask, dropout=dropout)
torch.manual_seed(0)
with sdpa_kernel(backends=backends):
sdpa_result = scaled_dot_product_attention(
query, key, value, attn_mask=mask, is_causal=False, dropout_p=dropout, scale=1.0
)
return torch.abs(manual_result - sdpa_result).mean()
torch.manual_seed(0)
query = torch.randn(2, 3, 4, 8, device="cuda:0")
key = torch.randn(2, 3, 4, 8, device="cuda:0")
value = torch.randn(2, 3, 4, 8, device="cuda:0")
mask = torch.where(torch.rand(2, 1, 4, 4, device="cuda:0") > 0.5, 0.0, -float("inf"))
print(compare(query, key, value, mask=mask, dropout=0.0, backends=[SDPBackend.EFFICIENT_ATTENTION])) # tensor(1.0005e-07, device='cuda:0')
print(compare(query, key, value, mask=mask, dropout=0.5, backends=[SDPBackend.EFFICIENT_ATTENTION])) # tensor(0.9543, device='cuda:0')
print(compare(query, key, value, mask=mask, dropout=0.0, backends=[SDPBackend.MATH])) # tensor(0., device='cuda:0')
print(compare(query, key, value, mask=mask, dropout=0.5, backends=[SDPBackend.MATH])) # tensor(0., device='cuda:0')
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @pbelevich
| true
|
2,938,232,466
|
`torch.compile` does not work when `set_priority` is specified in `sdpa_kernel`
|
abdulfatir
|
closed
|
[
"oncall: pt2",
"module: sdpa"
] | 2
|
NONE
|
### 🐛 Describe the bug
Model compilation does not work when the `set_priority` kwarg is provided to the `sdpa_kernel` context manager. See example below.
```py
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from torch.nn.functional import scaled_dot_product_attention
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.o = torch.nn.Linear(64, 128)
def forward(self, q, k, v, mask):
with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH], set_priority=True):
out = scaled_dot_product_attention(
query=q,
key=k,
value=v,
attn_mask=mask,
is_causal=False,
scale=1.0,
)
return out
model = Model().to("cuda:0")
model = torch.compile(model)
q = torch.randn(32, 1, 10, 64).to("cuda:0")
k = torch.randn(32, 1, 6, 64).to("cuda:0")
v = torch.randn(32, 1, 6, 64).to("cuda:0")
mask = torch.ones(32, 1, 10, 6).to("cuda:0")
model(q, k, v, mask)
```
Fails with:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[2], line 32
29 v = torch.randn(32, 1, 6, 64).to("cuda:0")
30 mask = torch.ones(32, 1, 10, 6).to("cuda:0")
---> 32 model(q, k, v, mask)
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/nn/modules/module.py:1739, in Module._wrapped_call_impl(self, *args, **kwargs)
1737 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1738 else:
-> 1739 return self._call_impl(*args, **kwargs)
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/nn/modules/module.py:1750, in Module._call_impl(self, *args, **kwargs)
1745 # If we don't have any hooks, we want to skip the rest of the logic in
1746 # this function, and just call forward.
1747 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1748 or _global_backward_pre_hooks or _global_backward_hooks
1749 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1750 return forward_call(*args, **kwargs)
1752 result = None
1753 called_always_called_hooks = set()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py:574, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
569 saved_dynamic_layer_stack_depth = (
570 torch._C._functorch.get_dynamic_layer_stack_depth()
571 )
573 try:
--> 574 return fn(*args, **kwargs)
575 finally:
576 # Restore the dynamic layer stack depth if necessary.
577 torch._C._functorch.pop_dynamic_layer_stack_and_undo_to_depth(
578 saved_dynamic_layer_stack_depth
579 )
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/nn/modules/module.py:1739, in Module._wrapped_call_impl(self, *args, **kwargs)
1737 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1738 else:
-> 1739 return self._call_impl(*args, **kwargs)
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/nn/modules/module.py:1750, in Module._call_impl(self, *args, **kwargs)
1745 # If we don't have any hooks, we want to skip the rest of the logic in
1746 # this function, and just call forward.
1747 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1748 or _global_backward_pre_hooks or _global_backward_hooks
1749 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1750 return forward_call(*args, **kwargs)
1752 result = None
1753 called_always_called_hooks = set()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:1380, in CatchErrorsWrapper.__call__(self, frame, cache_entry, frame_state)
1374 return hijacked_callback(
1375 frame, cache_entry, self.hooks, frame_state
1376 )
1378 with compile_lock, _disable_current_modes():
1379 # skip=1: skip this frame
-> 1380 return self._torchdynamo_orig_callable(
1381 frame, cache_entry, self.hooks, frame_state, skip=1
1382 )
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:1164, in ConvertFrame.__call__(self, frame, cache_entry, hooks, frame_state, skip)
1162 counters["frames"]["total"] += 1
1163 try:
-> 1164 result = self._inner_convert(
1165 frame, cache_entry, hooks, frame_state, skip=skip + 1
1166 )
1167 counters["frames"]["ok"] += 1
1168 return result
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:547, in ConvertFrameAssert.__call__(self, frame, cache_entry, hooks, frame_state, skip)
544 dynamo_tls.traced_frame_infos.append(info)
546 with compile_context(CompileContext(compile_id)):
--> 547 return _compile(
548 frame.f_code,
549 frame.f_globals,
550 frame.f_locals,
551 frame.f_builtins,
552 frame.closure,
553 self._torchdynamo_orig_callable,
554 self._one_graph,
555 self._export,
556 self._export_constraints,
557 hooks,
558 cache_entry,
559 cache_size,
560 frame,
561 frame_state=frame_state,
562 compile_id=compile_id,
563 skip=skip + 1,
564 )
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:986, in _compile(code, globals, locals, builtins, closure, compiler_fn, one_graph, export, export_constraints, hooks, cache_entry, cache_size, frame, frame_state, compile_id, skip)
984 guarded_code = None
985 try:
--> 986 guarded_code = compile_inner(code, one_graph, hooks, transform)
988 # NB: We only put_code_state in success case. Success case here
989 # does include graph breaks; specifically, if a graph break still
990 # resulted in a partially compiled graph, we WILL return here. An
(...)
995 # to upload for graph break though, because this can prevent
996 # extra graph break compilations.)
997 put_code_state()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:715, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
713 stack.enter_context(torch._dynamo.callback_handler.install_callbacks())
714 stack.enter_context(CompileTimeInstructionCounter.record())
--> 715 return _compile_inner(code, one_graph, hooks, transform)
717 return None
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_utils_internal.py:95, in compile_time_strobelight_meta.<locals>.compile_time_strobelight_meta_inner.<locals>.wrapper_function(*args, **kwargs)
92 kwargs["skip"] = skip + 1
94 if not StrobelightCompileTimeProfiler.enabled:
---> 95 return function(*args, **kwargs)
97 return StrobelightCompileTimeProfiler.profile_compile_time(
98 function, phase_name, *args, **kwargs
99 )
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:750, in _compile.<locals>._compile_inner(code, one_graph, hooks, transform)
748 CompileContext.get().attempt = attempt
749 try:
--> 750 out_code = transform_code_object(code, transform)
751 break
752 except exc.RestartAnalysis as e:
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py:1361, in transform_code_object(code, transformations, safe)
1358 instructions = cleaned_instructions(code, safe)
1359 propagate_line_nums(instructions)
-> 1361 transformations(instructions, code_options)
1362 return clean_and_assemble_instructions(instructions, keys, code_options)[1]
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:231, in preserve_global_state.<locals>._fn(*args, **kwargs)
229 exit_stack.enter_context(torch_function_mode_stack_state_mgr)
230 try:
--> 231 return fn(*args, **kwargs)
232 finally:
233 cleanup.close()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py:662, in _compile.<locals>.transform(instructions, code_options)
660 try:
661 with tracing(tracer.output.tracing_context), tracer.set_current_tx():
--> 662 tracer.run()
663 except exc.UnspecializeRestartAnalysis:
664 speculation_log.clear()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2868, in InstructionTranslator.run(self)
2867 def run(self):
-> 2868 super().run()
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:1052, in InstructionTranslatorBase.run(self)
1050 try:
1051 self.output.push_tx(self)
-> 1052 while self.step():
1053 pass
1054 except TensorifyScalarRestartAnalysis:
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:962, in InstructionTranslatorBase.step(self)
959 self.update_block_stack(inst)
961 try:
--> 962 self.dispatch_table[inst.opcode](self, inst)
963 return not self.output.should_exit
964 except TensorifyScalarRestartAnalysis:
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:659, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
657 return handle_graph_break(self, inst, speculation.reason)
658 try:
--> 659 return inner_fn(self, inst)
660 except Unsupported as excp:
661 if self.generic_context_manager_depth > 0:
662 # We don't support graph break under GenericContextWrappingVariable,
663 # If there is, we roll back to the checkpoint and fall back.
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2341, in InstructionTranslatorBase.CALL(self, inst)
2339 @break_graph_if_unsupported(push=1)
2340 def CALL(self, inst):
-> 2341 self._call(inst)
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:2335, in InstructionTranslatorBase._call(self, inst, call_kw)
2330 kwargs = {}
2332 try:
2333 # if call_function fails, need to set kw_names to None, otherwise
2334 # a subsequent call may have self.kw_names set to an old value
-> 2335 self.call_function(fn, args, kwargs)
2336 finally:
2337 self.kw_names = None
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py:897, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
895 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
896 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 897 self.push(fn.call_function(self, args, kwargs))
File /fsx/ansarnd/miniconda3/envs/chronos/lib/python3.11/site-packages/torch/_dynamo/variables/torch.py:352, in TorchCtxManagerClassVariable.call_function(self, tx, args, kwargs)
348 return FSDPParamGroupUseTrainingStateVariable.create(
349 tx, args[0], args[1].as_python_constant()
350 )
351 elif self.value is torch.nn.attention.sdpa_kernel:
--> 352 assert len(args) == 1 or (len(kwargs) == 1 and "backends" in kwargs)
353 backends = args[0] if len(args) == 1 else kwargs["backends"]
354 return SDPAKernelVariable.create(tx, backends.as_python_constant())
AssertionError:
from user code:
File "/tmp/ipykernel_3180308/3479898774.py", line 12, in forward
with sdpa_kernel(backends=[SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH], set_priority=True):
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,938,194,525
|
`torch.linalg.ldl_factor_ex`, `torch.linalg.ldl_factor`, and `torch.linalg.lstsq` Raise INTERNAL ASSERT FAILED
|
vwrewsge
|
open
|
[
"module: error checking",
"triaged",
"module: linear algebra"
] | 0
|
NONE
|
### 🐛 Describe the bug
# Bug 1
Code:
```
import torch
from torch.linalg import ldl_factor_ex
A = torch.eye(3, 3)
A[-1, -1] = 0
ldl_factor_ex(A, hermitian=True, check_errors=True)
```
Output:
```
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1630, please report a bug to PyTorch. torch.linalg.ldl_factor_ex: Unknown error code: 3.
```
# Bug 2
Code:
```
import torch
A = torch.eye(3, dtype=torch.float64)
A[-1, -1] = 0
LD, pivots = torch.linalg.ldl_factor(A)
```
Output:
```
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1630, please report a bug to PyTorch. torch.linalg.ldl_factor: Unknown error code: 3.
```
# Bug 3
Code:
```
import torch
A = torch.randn(3, 3)
B = torch.randn(3, 2)
A[0, 0] = float('nan')
result = torch.linalg.lstsq(A, B, driver='gelsd')
```
Output:
```
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1601, please report a bug to PyTorch.
```
### Versions
torch 2.6.0
cc @malfet @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,938,186,846
|
[ONNX][verification] `find_mismatch` Raises `INTERNAL ASSERT FAILED`
|
vwrewsge
|
closed
|
[
"module: onnx",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
Code
```
import torch
import torch.onnx
import torch.jit
from torch import nn, Tensor
import io
from torch.onnx.verification import find_mismatch
class Model(nn.Module):
def __init__(self):
super().__init__()
self.module = nn.Linear(8, 4)
self.module2 = nn.Linear(4, 2)
def forward(self, x: Tensor) -> Tensor:
preout = self.module(x)
out = self.module2(preout)
return out
model = Model()
scripted_model = torch.jit.script(model)
dummy_input = torch.randn(3, 8)
opset_version = 9
graph_info = find_mismatch(model=scripted_model,
input_args=(dummy_input,),
opset_version=opset_version,
verbose=False)
```
Output:
```
RuntimeError: isObject() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/core/ivalue_inl.h":1696, please report a bug to PyTorch. Expected Object but got None
```
### Versions
torch 2.6.0
| true
|
2,938,113,438
|
`capturable` docs should be consistent with the error message on `torch.optim.RMSprop()` and `torch.optim.AdamW()`
|
ILCSFNO
|
closed
|
[
"module: docs",
"module: optimizer",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 📚 The doc issue
The docs of [torch.optim.RMSprop()](https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html#torch.optim.RMSprop) and [torch.optim.AdamW()](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html#torch.optim.AdamW) show their shared description as below:
https://github.com/pytorch/pytorch/blob/d072254eaea325a507c1498431e4c8294205fe2d/torch/optim/rmsprop.py#L255
https://github.com/pytorch/pytorch/blob/d072254eaea325a507c1498431e4c8294205fe2d/torch/optim/adamw.py#L115
https://github.com/pytorch/pytorch/blob/d072254eaea325a507c1498431e4c8294205fe2d/torch/optim/optimizer.py#L272-L275
It shows that the parameter `capturable` only controls `cuda` graph capture compatibility.
But the repro below raises an error:
### Repro
```python
import torch
import torch.nn as nn
input_data = torch.randn(1, 10)
model = nn.Linear(10, 1)
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01, capturable=True)
# optimizer = torch.optim.AdamW(model.parameters(), lr=0.01, capturable=True)
optimizer.zero_grad()
criterion = nn.MSELoss()
for epoch in range(100):
optimizer.zero_grad()
outputs = model(input_data)
loss = criterion(outputs, input_data)
loss.backward()
optimizer.step()
print(f'Epoch [{(epoch + 1)}/{100}], Loss: {loss.item():.4f}')
```
### Output
```txt
AssertionError: If capturable=True, params and state_steps must be on supported devices: ['cuda', 'xpu', 'hpu', 'privateuseone', 'xla'].
```
The devices in the output message come from:
https://github.com/pytorch/pytorch/blob/d072254eaea325a507c1498431e4c8294205fe2d/torch/optim/optimizer.py#L217-L224
which is not consistent with the description of `capturable`.
In all, I accept that it should raise an error, but the documentation could say more about this.
That is, the parameter `capturable` controls graph capture compatibility not only on `cuda`, but also on `xpu/hpu/privateuseone/xla`.
Suggestions are shown below in detail.
Thanks for noting!
### Suggest a potential alternative/fix
I suggest changing the description of `capturable` from:
https://github.com/pytorch/pytorch/blob/d072254eaea325a507c1498431e4c8294205fe2d/torch/optim/optimizer.py#L272-L275
to:
```python
_capturable_doc = r"""capturable (bool, optional): whether this instance is safe to
capture in a graph from device cuda/xpu/hpu/privateuseone/xla.
Passing True can impair ungraphed performance,
so if you don't intend to graph capture this instance, leave it False
(default: False)"""
```
cc @svekars @sekyondaMeta @AlannaBurke @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,938,016,411
|
Let pointwise sharding take arg with largest number of dims in case of ties
|
fmassa
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
MEMBER
|
Before, we would take the first argument with the largest number of shards, even if another arg with the same number of shards had more dimensions. This could lead to fewer sharding options.
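Illustratively (a toy sketch, not the actual DTensor sharding-propagation code), the tie-break described above amounts to:
```python
# Each candidate arg represented as (num_shards, ndim); values are hypothetical.
candidate_args = [(2, 2), (2, 4), (1, 5)]

# Before: the first arg with the most shards wins, even if it has fewer dims -> (2, 2)
before = max(candidate_args, key=lambda a: a[0])

# After: ties on num_shards are broken by preferring more dims -> (2, 4)
after = max(candidate_args, key=lambda a: (a[0], a[1]))
```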
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,937,934,168
|
Implement `permute` for masked tensor
|
JackCaster
|
open
|
[
"triaged",
"module: nestedtensor"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
### 🚀 The feature, motivation and pitch
I have an RNN-like module, which reads the input step by step in a for-loop. The batch contains sequences of different lengths, which are therefore padded. I would like to ignore the padding throughout the various layers in the network. I thought I could use masked tensors! But the `permute` function I need is not yet implemented:
> /opt/conda/envs/project/lib/python3.11/site-packages/torch/masked/maskedtensor/core.py:322: UserWarning: permute is not implemented in __torch_dispatch__ for MaskedTensor.
If you would like this operator to be supported, please file an issue for a feature request at https://github.com/pytorch/maskedtensor/issues with a minimal reproducible code snippet.
In the case that the semantics for the operator are not trivial, it would be appreciated to also include a proposal for the semantics.
I'm opening the request here as the GitHub repo for maskedtensor is archived.
Or, is there an alternative to ignore padding at a core level? I know RNN can accept packed sequences natively, but linear layers and so on cannot.
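A minimal snippet that hits this, assuming the standard `torch.masked` prototype API:
```python
import torch
from torch.masked import masked_tensor

data = torch.randn(2, 3, 4)
mask = torch.ones(2, 3, 4, dtype=torch.bool)
mt = masked_tensor(data, mask)
mt.permute(1, 0, 2)  # triggers the "permute is not implemented" warning above
```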
### Alternatives
_No response_
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,937,827,269
|
Utility function to get the best available device
|
Halyjo
|
closed
|
[
"triaged",
"module: accelerator"
] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
# Utility function to get best available device
A piece of code I need in all my projects is a function that simply checks which devices are available and selects the best available option. Could this be a pytorch utility function?
**The function I usually use for this:**
```python
import torch

def get_best_device(priority=("cuda", "mps", "cpu")):
"""Returns the best available device from a priority list.
Args:
priority (tuple): Device names in decreasing priority.
Returns:
torch.device: The best available device.
Raises:
ValueError: If no suitable device is found from the priority list.
"""
for device_name in priority:
if device_name == "cuda" and torch.cuda.is_available():
return torch.device("cuda")
elif device_name == "mps" and torch.backends.mps.is_available():
return torch.device("mps")
elif device_name == "cpu":
return torch.device("cpu")
raise ValueError("No suitable device found from the priority list.")
```
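Intended usage would be something like:
```python
import torch

device = get_best_device()  # cuda if available, else mps, else cpu
model = torch.nn.Linear(10, 1).to(device)
x = torch.randn(2, 10, device=device)
y = model(x)
```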
**Questions:**
1. Should this function (in any form) be part of PyTorch?
    1. Does this already exist in PyTorch or neighboring libraries?
    2. Are there any reasons it's not included or should not be included?
If yes to 1:
1. The priority tuple of devices only contains the ones I normally have access to. Should others be added? (Looking at `torch.backends`, I imagine yes, but I don't know the other options.)
2. Where in the library should it be? Somewhere in `torch.backends`?
TODO if function is to be included:
- [ ] Add type hints
- [ ] Validation of priority list. Any acceptable device type should probably be fine in the list.
I would appreciate any thoughts, advice and feedback :)
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @guangyey @EikanWang
| true
|
2,937,748,788
|
Constraints for distributions with mixed support
|
sethaxen
|
open
|
[
"module: distributions",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
I'd like to implement a joint distribution of both discrete and continuous parameters and would like to be able to define a constraint that indicates that the support is mixed and which parameters are continuous. A potential use-case is representing an approximate posterior distribution (with continuous _and_ discrete parameters) with a distribution, to use it as a prior when new data comes in in a Pyro model.
`Distribution.support` is required to be a child of `Constraint`, and `Constraint.is_discrete` returns a `bool`. `constraints._Cat`, for example, returns `True` for `is_discrete` if _any_ of the component constraints are discrete, i.e. PyTorch treats distributions with partially discrete support as having a support that is entirely discrete.
I can think of 2 solutions:
1. Allow `Constraint.is_discrete` to return a `Tensor` of `Bool` indicating which parameters would have which constraints.
2. Allow `Distribution.support` to return a `Tensor` of `Constraint` indicating which parameters would have which constraints.
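For instance, a rough sketch of option 1 (a hypothetical API change, not current PyTorch behavior), where a concatenated constraint reports per-element discreteness:
```python
import torch
from torch.distributions import constraints

class _MixedCat(constraints.Constraint):
    """Hypothetical cat-like constraint whose `is_discrete` is a Bool tensor
    with one entry per concatenated element (option 1 above)."""

    def __init__(self, cseq, lengths):
        self.cseq = list(cseq)
        self.lengths = list(lengths)

    @property
    def is_discrete(self):  # today this is required to be a plain bool
        flags = [c.is_discrete for c, n in zip(self.cseq, self.lengths) for _ in range(n)]
        return torch.tensor(flags, dtype=torch.bool)

    def check(self, value):
        # Delegate per-slice checks to the component constraints.
        pieces, start = [], 0
        for c, n in zip(self.cseq, self.lengths):
            pieces.append(c.check(value[..., start:start + n]))
            start += n
        return torch.cat(pieces, dim=-1)
```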
### Alternatives
_No response_
### Additional context
_No response_
cc @fritzo @neerajprad @alicanb @nikitaved
| true
|
2,937,676,192
|
[export] Save unflattened gm
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: export",
"ci-no-td"
] | 14
|
CONTRIBUTOR
|
Test Plan: CI
Differential Revision: D71082652
| true
|
2,937,661,329
|
[Inductor] Cache CUDA compilation errors
|
kadeng
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15
|
CONTRIBUTOR
|
Summary: Add support for caching of CUDA (nvcc) compilation errors to codecache.py
Test Plan: CI ( for example Cutlass backend unit tests )
Reviewed By: ColinPeppler
Differential Revision: D71562040
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,937,613,142
|
[LNL][Windows][Inductor] Application error: The memory could not be read.
|
libohao1201
|
closed
|
[] | 0
|
NONE
|
### 🐛 Describe the bug
When running E2E inductor on LNL, the following error appears randomly:

### Versions
- stock pytorch :
- pip install torch --index-url https://download.pytorch.org/whl/test/xpu
- git clone https://github.com/pytorch/pytorch.git
- git checkout b1940b5867e40e40ebdce4db76f76d3d0b71d3f4
- torch-xpu-ops: Commit (pin) - 026b2c8c7c92a7b2cec5d26334006e3423251cc6
- Driver: 32.0.101.6647
| true
|
2,937,606,489
|
[XPU] Update triton commit to fix level_zero not found by env var LEVEL_ZERO_V1_SDK_PATH.
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/inductor",
"ciflow/xpu"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149511
| true
|
2,937,577,909
|
[Windows][Inductor] Invalid include path for cl.exe.
|
etaf
|
closed
|
[
"module: windows",
"triaged"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
When running
`python benchmarks/dynamo/huggingface.py --accuracy --amp --amp-dtype bfloat16 -dxpu -n1 --inference --backend inductor --only XLNetLMHeadModel`
I met the following error:
```
torch._inductor.exc.InductorError: CppCompileError: C++ compile error
Command:
cl /I C:/Program Files/WindowsApps/PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0/Include /I C:/Users/sdp/pt27_ww10_rc1/lib/site-packages/torch/include /I C:/Users/sdp/pt27_ww10_rc1/lib/site-packages/torch/include/torch/csrc/api/include /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/sdp/AppData/Local/Temp/tmpj2g4v2bc/h7/ch7vpvqewski37q3govb6bj4mosyzcj26s7ujuk5yw253ijv2fcx.cpp /LD /FeC:/Users/sdp/AppData/Local/Temp/tmpj2g4v2bc/h7/ch7vpvqewski37q3govb6bj4mosyzcj26s7ujuk5yw253ijv2fcx.pyd /link /LIBPATH:C:/Users/sdp/pt27_ww10_rc1/Scripts/libs /LIBPATH:C:/Users/sdp/pt27_ww10_rc1/lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib
```
Root cause: there is a space in the path `C:/Program Files/WindowsApps/PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0/Include`, so we should wrap the path in `""`.
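A minimal sketch of the fix idea (hypothetical helper, not the actual cpp_builder code): quote any include path that contains spaces before it goes onto the `cl` command line:
```python
def quote_path_for_cl(path: str) -> str:
    # cl.exe splits unquoted arguments on spaces, so paths like
    # "C:/Program Files/..." must be wrapped in double quotes.
    return f'"{path}"' if " " in path and not path.startswith('"') else path

include_flag = "/I " + quote_path_for_cl("C:/Program Files/WindowsApps/.../Include")
```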
### Versions
Pytorch 2.7 release branch.
| true
|
2,937,538,099
|
[dynamo] Ensure placeholder name is not an intermediate node name
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149758
* __->__ #149712
Fixes https://fb.workplace.com/groups/1075192433118967/permalink/1615671879071017/
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,937,521,060
|
[Dynamo] Clean up old torch function flag
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 16
|
CONTRIBUTOR
|
This is tracked via `SymbolicTorchFunctionState` now.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,937,479,334
|
pretty print graph signature
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149710
Fixes #141243
Differential Revision: [D71604218](https://our.internmc.facebook.com/intern/diff/D71604218/)
| true
|
2,937,461,518
|
[ca] API comments and support dynamic shapes via configs
|
xmfan
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149784
* #149773
* #149651
* __->__ #149709
* #149647
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,937,418,484
|
[Quant][PT2E] add a lowering pass for x86 backend
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"intel"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149708
**Summary**
This PR adds a lowering pass for x86 backend
- Patterns of `dequantize -> conv/linear (-> quantize)` are fused to corresponding quantized onednn ops.
- Weights are prepacked ahead of time.
- Post ops of conv/linear are fused if supported.
- The pass returns a `GraphModule` with the modifications mentioned above.
**Test plan**
```
pytest test/quantization/pt2e/test_x86inductor_quantizer.py -k test_lowering_to_x86
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,937,359,439
|
[aot] mark dynamic activations as maybe dynamic
|
xmfan
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: pt2-dispatcher"
] | 7
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152633
* #152119
* #151962
* #151731
* #151860
* __->__ #149707
Today, we mark graph outputs as maybe dynamic; this lets a compilation communicate to future compilations whether certain graph inputs are dynamic. Similarly, we can do this for saved activations, which may be used in future compilations as well. This is especially prevalent in compiled autograd, where tensor activations will always become graph inputs.
Changes to the tests were mainly cosmetic, with the exception of tests that relied on duck shaping. By annotating tensor dims, we prevent them from reusing pre-existing symbols, so this change will make graphs use duck shapes less than before, which affects some of the caching tests.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @zou3519 @bdhirsh
| true
|
2,937,359,354
|
[ca] torch.compile API comments and support older dynamic shapes API used in benchmarks
|
xmfan
|
open
|
[
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,937,337,272
|
[MPS] Add support for scaled_modified_bessel_k0 for eager.
|
dcci
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 4
|
MEMBER
| null | true
|
2,937,317,870
|
[MPS] Add inline to function definition.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps"
] | 4
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,937,315,874
|
[XPU][Inductor] Failed to run max-autotune in subprocess.
|
etaf
|
open
|
[
"triaged",
"module: xpu"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
Currently, max-autotune in a subprocess on XPU gets `RuntimeError: _share_fd_: only available on CPU`:
```
python test/inductor/test_max_autotune.py TestMaxAutotune.test_benchmark_choice_in_subproc
ERROR: test_benchmark_choice_in_subproc (__main__.TestMaxAutotune)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/home/xinanlin/xinanlin/pytorch/test/inductor/test_max_autotune.py", line 121, in test_benchmark_choice_in_subproc
child.start()
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
return Popen(process_obj)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 618, in reduce_storage
fd, size = storage._share_fd_cpu_()
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/site-packages/torch/storage.py", line 447, in wrapper
return fn(self, *args, **kwargs)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/site-packages/torch/storage.py", line 522, in _share_fd_cpu_
return super()._share_fd_cpu_(*args, **kwargs)
RuntimeError: _share_fd_: only available on CPU
To execute this test, run the following from the base repo dir:
python test/inductor/test_max_autotune.py TestMaxAutotune.test_benchmark_choice_in_subproc
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
### Versions
PyTorch version: 2.8.0a0+gitca71904
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,937,271,001
|
[PT2] Port use_triton_lce to PT2 pre_grad passes
|
huxintong
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary:
`use_triton_lce_replace_simple_LCE` and `use_triton_lce_replace_normal_LCE`
The code is mostly the same, with some minor changes to support aten IR.
Test Plan:
```
scripts/aetk/aetk -L
%run ~/fbsource/fbcode/caffe2/test/inductor/fb/test_customized_triton_kernel_passes.py
```
Will verify the QPS after everything in the stack is done.
Reviewed By: frank-wei
Differential Revision: D68909857
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,937,249,070
|
[TP] add support for fused QKV Sharding
|
wanchaol
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 7
|
COLLABORATOR
|
This PR adds fused QKV sharding in the TP layer. There should be no "strided" sharding involved, as the fused QKV linear layer is more about combining three layers into one.
See design and discussions: https://github.com/pytorch/pytorch/issues/140069#issuecomment-2683153303
resolves https://github.com/pytorch/pytorch/issues/140069
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,937,241,411
|
Improve subproc autotuning implementation
|
masnesral
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149890
* __->__ #149700
Summary: The primary change is to update the autotune-in-a-subproc implementation to avoid using multiprocessing spawn. Spawn (re)executes the toplevel script in the subproc, which can be problematic. The approach here is similar to Triton parallel compile: we Popen a subproc on a controlled entry point and communicate over pipes. That change drove a lot of refactoring in the TuningProcess class, so I took the opportunity to simplify some things, rename some methods, etc.
One other notable change is around the timeout / kill approach. After a timeout, we were previously attempting to stop the subproc in three steps (graceful shutdown, sigkill if graceful fails, sigterm if sigkill fails). I'm gonna argue that's not useful: 1) The graceful shutdown is never going to work unless the subproc happens to have just completed its task and is ready to receive the next command. 2) If we're going to kill the subproc, let's just take the most aggressive approach and move on as quickly as possible to restarting it rather than waiting to see if previous shutdown attempts succeeded. The only downside that I can find is maybe a little log spew?, e.g., ` ResourceWarning: subprocess 2987680 is still running`
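As a generic illustration of the Popen-plus-pipes pattern described above (a toy sketch; the module path and framing protocol here are made up and are not the actual TuningProcess / `__autotune_main__.py` implementation):
```python
import pickle
import struct
import subprocess
import sys

# Launch a controlled entry point instead of re-executing the toplevel script
# (which is what multiprocessing's "spawn" start method would do).
proc = subprocess.Popen(
    [sys.executable, "-m", "my_autotune_entry_point"],  # hypothetical module
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)

def send(obj):
    # Length-prefixed pickled messages over the subproc's stdin pipe.
    payload = pickle.dumps(obj)
    proc.stdin.write(struct.pack("<I", len(payload)) + payload)
    proc.stdin.flush()

def recv():
    (length,) = struct.unpack("<I", proc.stdout.read(4))
    return pickle.loads(proc.stdout.read(length))

# On a benchmarking timeout, kill aggressively and let the caller restart,
# rather than escalating through multiple shutdown attempts.
def kill_after_timeout():
    proc.kill()
    proc.wait()
```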
List of changes:
* Use Popen instead of spawn for the autotuning subprocess.
* Introduced a new entry point `__autotune_main__.py`
* Renamed some TuningProcess methods. For example `shutdown` makes more sense than `terminate` because the latter implies a forced kill.
* Simplified the implementation around benchmarking timeout and how we kill the subproc after a timeout.
* Deprecated the unused timeout configs in `_inductor/config.py`
* Moved `get_ld_library_path` helper to a common utils file.
* Added more unit tests for subproc crashes / timeouts / exceptions, etc.
Test plan:
* New unit tests
* Also ran internally with all combinations of: build mode `opt` and `dev-nosan`, and `buck run` vs. executing the `.par` file directly.
* Made sure the functionality to parallelize autotuning across different GPUs is working (it wasn't clear to me this was behaving the way we wanted it to).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D71976971](https://our.internmc.facebook.com/intern/diff/D71976971)
| true
|
2,937,204,247
|
Supporting non-tensor-data write_size in planner write items.
|
pradeepfn
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 5
|
CONTRIBUTOR
|
Summary:
1\ The current write item structure does not contain the amount of data that needs to be written.
2\ The planner item already has a size primitive 'tensor_storage_size'. https://fburl.com/code/7a0gsmw7 But only for tensors.
3\ Right now, the only way the writer layer gets hold of this property (for non-tensor data) is to
first do a lookup into the actual tensor/bytes
and then calculate the nbytes.
This change introduces a way to capture non-tensor data size within a write-plan item.
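A hypothetical shape of the change (names below are illustrative, not the actual torch.distributed.checkpoint planner types):
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WriteItemStorageInfo:
    # existing tensor-only primitive (cf. 'tensor_storage_size' above)
    tensor_storage_size: Optional[int] = None
    # new: byte size for non-tensor payloads, recorded at planning time so the
    # writer layer doesn't have to look up the data and compute nbytes itself
    non_tensor_storage_size: Optional[int] = None
```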
Test Plan: Existing UT.
Differential Revision: D71599725
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC
| true
|
2,937,195,041
|
Fix subclass access custom op bug
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Summary: When we call torch.inference_mode, we seem to skip the Autograd key, causing the custom op that export uses to not be decomposed properly before subclass dispatching starts. We fix this by force-desugaring this op at the Python key.
Test Plan: test
Differential Revision: D71599541
| true
|
2,937,155,258
|
Inductor logging + analysis of torch.profile
|
exclamaforte
|
open
|
[
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"suppress-bc-linter"
] | 6
|
CONTRIBUTOR
|
Prereqs:
- https://github.com/pytorch/pytorch/pull/152708
Features:
1. Adds inductor's estimate of flops and bandwidth to the json trace events that perfetto uses.
1. Only use the tflops estimation from triton if we don't have the info from the datasheet because Triton's estimates are inaccurate. I have a backlog item to fix triton flops estimation upstream. New `DeviceInfo` class, and new function `get_device_tflops`.
1. New helpers `countable_fx` and `count_flops_fx` helps get the flops of an `fx.Node`.
1. Extends Triton `torch.profiler` logging to `DebugAutotuner`.
1. New script `profile_analysis.py`: `--augment_trace` adds perf estimates to any perfetto json trace, `--analyze` creates a summary table of these perf estimates, and `--diff` will compare two traces side by side:
```python
Device(NVIDIA H100, 0):
Kernel Name | resnet Kernel Count | resnet FLOPS | resnet bw gbps | resnet Dur (ms) | resnet Achieved FLOPS % | resnet Achieved Bandwidth % | newresnet Kernel Count | newresnet FLOPS | newresnet bw gbps | newresnet Dur (ms) | newresnet Achieved FLOPS % | newresnet Achieved Bandwidth %
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
triton_poi_fused__native_batch_norm_legi | 24 | 0 | 0.11395268248131513 | 2.5919166666666666 | 0 | 0.003401572611382541 | 24 | 0 | 0.11395268248131513 | 2.5919166666666666 | 0 | 0.003401572611382541
sm90_xmma_fprop_implicit_gemm_f32f32_tf3 | 142 | 16932673552.422373 | 0.2585007824198784 | 12.441619718309857 | 0.08683422334575583 | 0.007716441266265022 | 142 | 16932673552.422373 | 0.2585007824198784 | 12.441619718309857 | 0.08683422334575583 | 0.007716441266265022
triton_red_fused__native_batch_norm_legi | 39 | 0 | 0.13990024992108846 | 5.752589743589743 | 0 | 0.004176126863316074 | 39 | 0 | 0.13990024992108846 | 5.752589743589743 | 0 | 0.004176126863316074
triton_poi_fused__native_batch_norm_legi | 25 | 0 | 0.31824055917536503 | 2.5291999999999994 | 0 | 0.009499718184339253 | 25 | 0 | 0.31824055917536503 | 2.5291999999999994 | 0 | 0.009499718184339253
void cutlass::Kernel2<cutlass_80_tensoro | 98 | 16211056473.596165 | 0.42972434051025826 | 7.130408163265306 | 0.08313362294151874 | 0.012827592254037562 | 98 | 16211056473.596165 | 0.42972434051025826 | 7.130408163265306 | 0.08313362294151874 | 0.012827592254037562
triton_red_fused__native_batch_norm_legi | 73 | 0 | 0.3225381327611705 | 9.987068493150682 | 0 | 0.009628003963020014 | 73 | 0 | 0.3225381327611705 | 9.987068493150682 | 0 | 0.009628003963020014
triton_poi_fused__native_batch_norm_legi | 15 | 0 | 1.4491211346487216 | 4.439333333333333 | 0 | 0.043257347302946926 | 15 | 0 | 1.4491211346487216 | 4.439333333333333 | 0 | 0.043257347302946926
void cutlass::Kernel2<cutlass_80_tensoro | 186 | 14501701145.337954 | 0.2667131401910989 | 7.873865591397849 | 0.07436769818122027 | 0.007961586274361157 | 186 | 14501701145.337954 | 0.2667131401910989 | 7.873865591397849 | 0.07436769818122027 | 0.007961586274361157
triton_poi_fused__native_batch_norm_legi | 33 | 0 | 1.4924556538193923 | 4.3101515151515155 | 0 | 0.044550915039384846 | 33 | 0 | 1.4924556538193923 | 4.3101515151515155 | 0 | 0.044550915039384846
triton_red_fused__native_batch_norm_legi | 29 | 0 | 0.25562590522631107 | 6.296275862068965 | 0 | 0.007630624036606301 | 29 | 0 | 0.25562590522631107 | 6.296275862068965 | 0 | 0.007630624036606301
triton_poi_fused__native_batch_norm_legi | 13 | 0 | 0.5870562174192726 | 2.7397692307692307 | 0 | 0.01752406619162008 | 13 | 0 | 0.5870562174192726 | 2.7397692307692307 | 0 | 0.01752406619162008
triton_poi_fused__native_batch_norm_legi | 34 | 0 | 0.41409928846284 | 2.853588235294117 | 0 | 0.012361172789935523 | 34 | 0 | 0.41409928846284 | 2.853588235294117 | 0 | 0.012361172789935523
triton_per_fused__native_batch_norm_legi | 34 | 0 | 0.11705315007018151 | 3.460647058823529 | 0 | 0.0034941238826919864 | 34 | 0 | 0.11705315007018151 | 3.460647058823529 | 0 | 0.0034941238826919864
triton_poi_fused__native_batch_norm_legi | 16 | 0 | 0.17207853197124584 | 2.3459375000000002 | 0 | 0.005136672596156592 | 16 | 0 | 0.17207853197124584 | 2.3459375000000002 | 0 | 0.005136672596156592
triton_per_fused__native_batch_norm_legi | 30 | 0 | 0.2639714322022256 | 6.131199999999999 | 0 | 0.007879744244842555 | 30 | 0 | 0.2639714322022256 | 6.131199999999999 | 0 | 0.007879744244842555
sm90_xmma_fprop_implicit_gemm_f32f32_tf3 | 100 | 11875430356.891787 | 0.19494470869421385 | 16.36534 | 0.06089964285585531 | 0.005819245035648175 | 100 | 11875430356.891787 | 0.19494470869421385 | 16.36534 | 0.06089964285585531 | 0.005819245035648175
triton_poi_fused__native_batch_norm_legi | 8 | 0 | 0.9854096626224687 | 3.2757500000000004 | 0 | 0.029415213809625928 | 8 | 0 | 0.9854096626224687 | 3.2757500000000004 | 0 | 0.029415213809625928
void cublasLt::splitKreduce_kernel<32, 1 | 56 | 34377923395.147064 | 0.8310300045762317 | 3.4199999999999986 | 0.17629704305203628 | 0.024806865808245714 | 56 | 34377923395.147064 | 0.8310300045762317 | 3.4199999999999986 | 0.17629704305203628 | 0.024806865808245714
triton_poi_fused__native_batch_norm_legi | 23 | 0 | 0.9944002965861103 | 3.2431304347826084 | 0 | 0.02968359094286896 | 23 | 0 | 0.9944002965861103 | 3.2431304347826084 | 0 | 0.02968359094286896
triton_per_fused__native_batch_norm_legi | 10 | 0 | 0.1826801058931057 | 4.428800000000001 | 0 | 0.00545313748934644 | 10 | 0 | 0.1826801058931057 | 4.428800000000001 | 0 | 0.00545313748934644
triton_poi_fused__native_batch_norm_legi | 10 | 0 | 0.3168973585366449 | 2.5471999999999997 | 0 | 0.009459622642884923 | 10 | 0 | 0.3168973585366449 | 2.5471999999999997 | 0 | 0.009459622642884923
triton_poi_fused__native_batch_norm_legi | 34 | 0 | 1.1463614897015777 | 4.124323529411764 | 0 | 0.03421974596124114 | 34 | 0 | 1.1463614897015777 | 4.124323529411764 | 0 | 0.03421974596124114
void cask_plugin_cudnn::xmma_cudnn::init | 44 | 44045510816.64277 | 2.0661232850348643 | 3.6887499999999993 | 0.22587441444432194 | 0.06167532194133924 | 44 | 44045510816.64277 | 2.0661232850348643 | 3.6887499999999993 | 0.22587441444432194 | 0.06167532194133924
sm90_xmma_fprop_implicit_gemm_f32f32_tf3 | 95 | 7876855400.165316 | 0.4694941555946739 | 18.224315789473682 | 0.04039413025725802 | 0.014014750913273854 | 95 | 7876855400.165316 | 0.4694941555946739 | 18.224315789473682 | 0.04039413025725802 | 0.014014750913273854
triton_per_fused__native_batch_norm_legi | 41 | 0 | 0.06825669875995298 | 3.0384146341463416 | 0 | 0.002037513395819492 | 41 | 0 | 0.06825669875995298 | 3.0384146341463416 | 0 | 0.002037513395819492
triton_poi_fused__native_batch_norm_legi | 23 | 0 | 0.08808154712430301 | 2.3275652173913044 | 0 | 0.0026292999141582997 | 23 | 0 | 0.08808154712430301 | 2.3275652173913044 | 0 | 0.0026292999141582997
triton_per_fused__native_batch_norm_legi | 40 | 0 | 0.18179321034952417 | 4.556825 | 0 | 0.005426662995508183 | 40 | 0 | 0.18179321034952417 | 4.556825 | 0 | 0.005426662995508183
triton_poi_fused__native_batch_norm_legi | 15 | 0 | 0.5887415155454232 | 2.783866666666667 | 0 | 0.017574373598370836 | 15 | 0 | 0.5887415155454232 | 2.783866666666667 | 0 | 0.017574373598370836
void cutlass::Kernel2<cutlass_80_tensoro | 38 | 14242013806.264643 | 0.256592404353939 | 7.217631578947369 | 0.0730359682372546 | 0.007659474756834 | 38 | 14242013806.264643 | 0.256592404353939 | 7.217631578947369 | 0.0730359682372546 | 0.007659474756834
triton_poi_fused__native_batch_norm_legi | 21 | 0 | 0.5842860973430516 | 2.7779047619047623 | 0 | 0.017441376040091088 | 21 | 0 | 0.5842860973430516 | 2.7779047619047623 | 0 | 0.017441376040091088
triton_per_fused__native_batch_norm_legi | 16 | 0 | 0.11509365173486417 | 3.5959375000000002 | 0 | 0.0034356313950705724 | 16 | 0 | 0.11509365173486417 | 3.5959375000000002 | 0 | 0.0034356313950705724
triton_poi_fused__native_batch_norm_legi | 14 | 0 | 0.1704672000243914 | 2.4044285714285714 | 0 | 0.00508857313505646 | 14 | 0 | 0.1704672000243914 | 2.4044285714285714 | 0 | 0.00508857313505646
triton_poi_fused__native_batch_norm_legi | 58 | 0 | 2.307520779930795 | 8.190706896551722 | 0 | 0.06888121731136704 | 58 | 0 | 2.307520779930795 | 8.190706896551722 | 0 | 0.06888121731136704
triton_per_fused__native_batch_norm_legi | 29 | 0 | 0.037243248971881276 | 3.0277586206896556 | 0 | 0.001111738775280038 | 29 | 0 | 0.037243248971881276 | 3.0277586206896556 | 0 | 0.001111738775280038
triton_poi_fused__native_batch_norm_legi | 20 | 0 | 0.04741699795428918 | 2.2911500000000005 | 0 | 0.0014154327747549007 | 20 | 0 | 0.04741699795428918 | 2.2911500000000005 | 0 | 0.0014154327747549007
triton_per_fused__native_batch_norm_legi | 25 | 0 | 0.13357016893727824 | 3.37536 | 0 | 0.003987169222008305 | 25 | 0 | 0.13357016893727824 | 3.37536 | 0 | 0.003987169222008305
triton_poi_fused__native_batch_norm_legi | 13 | 0 | 0.3089862268300253 | 2.8111538461538457 | 0 | 0.009223469457612694 | 13 | 0 | 0.3089862268300253 | 2.8111538461538457 | 0 | 0.009223469457612694
triton_poi_fused__native_batch_norm_legi | 17 | 0 | 0.3129385387909844 | 2.673 | 0 | 0.009341448919133863 | 17 | 0 | 0.3129385387909844 | 2.673 | 0 | 0.009341448919133863
triton_per_fused__native_batch_norm_legi | 19 | 0 | 0.2215568162533158 | 3.8837368421052636 | 0 | 0.0066136363060691275 | 19 | 0 | 0.2215568162533158 | 3.8837368421052636 | 0 | 0.0066136363060691275
std::enable_if<!(false), void>::type int | 23 | 504916805.19297093 | 1.0118296096314707 | 8.113913043478261 | 0.0025893169497075447 | 0.030203868944223014 | 23 | 504916805.19297093 | 1.0118296096314707 | 8.113913043478261 | 0.0025893169497075447 | 0.030203868944223014
triton_poi_fused_add_copy__38 | 56 | 0 | 0 | 2.132482142857143 | 0 | 0 | 56 | 0 | 0 | 2.132482142857143 | 0 | 0
triton_poi_fused_convolution_0 | 18 | 0 | 0.43458610794936897 | 2.773333333333334 | 0 | 0.012972719640279667 | 18 | 0 | 0.43458610794936897 | 2.773333333333334 | 0 | 0.012972719640279667
triton_poi_fused_convolution_1 | 17 | 0 | 0.028816312469162712 | 2.6145882352941174 | 0 | 0.0008601884319153051 | 17 | 0 | 0.028816312469162712 | 2.6145882352941174 | 0 | 0.0008601884319153051
void convolve_common_engine_float_NHWC<f | 44 | 8641868995.31118 | 0.024730540008465626 | 25.87327272727273 | 0.04431727689903169 | 0.0007382250748795709 | 44 | 8641868995.31118 | 0.024730540008465626 | 25.87327272727273 | 0.04431727689903169 | 0.0007382250748795709
triton_per_fused__native_batch_norm_legi | 12 | 0 | 0.6809930918986744 | 4.82675 | 0 | 0.020328151996975356 | 12 | 0 | 0.6809930918986744 | 4.82675 | 0 | 0.020328151996975356
triton_per_fused__native_batch_norm_legi | 14 | 0 | 0.02883030597936608 | 2.6651428571428575 | 0 | 0.0008606061486377935 | 14 | 0 | 0.02883030597936608 | 2.6651428571428575 | 0 | 0.0008606061486377935
triton_per_fused__native_batch_norm_legi | 16 | 0 | 0.0014658988233201874 | 2.098 | 0 | 4.375817383045335e-05 | 16 | 0 | 0.0014658988233201874 | 2.098 | 0 | 4.375817383045335e-05
triton_poi_fused__native_batch_norm_legi | 13 | 0 | 0.9926297180284697 | 3.2367692307692306 | 0 | 0.02963073785159611 | 13 | 0 | 0.9926297180284697 | 3.2367692307692306 | 0 | 0.02963073785159611
triton_poi_fused__native_batch_norm_legi | 9 | 0 | 1.3008817095666507 | 3.0863333333333336 | 0 | 0.03883228983781048 | 9 | 0 | 1.3008817095666507 | 3.0863333333333336 | 0 | 0.03883228983781048
void at::native::(anonymous namespace):: | 98 | 0 | 0.09174335613709389 | 4.408520408163265 | 0 | 0.0027386076458833994 | 98 | 0 | 0.09174335613709389 | 4.408520408163265 | 0 | 0.0027386076458833994
void at::native::vectorized_elementwise_ | 7 | 0 | 0 | 1.7278571428571428 | 0 | 0 | 7 | 0 | 0 | 1.7278571428571428 | 0 | 0
```
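For reference, a minimal way to produce a trace that the script could post-process (standard `torch.profiler` usage, assuming a CUDA device; the `profile_analysis.py` invocation in the trailing comment and the file name are assumptions, not from this PR):
```python
import torch
from torch.profiler import ProfilerActivity, profile

def fn(x):
    return torch.relu(x @ x)

compiled = torch.compile(fn)
x = torch.randn(1024, 1024, device="cuda")
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    compiled(x)
# Export a perfetto-compatible json trace that the new script can post-process.
prof.export_chrome_trace("trace.json")
# The exported trace could then be fed to profile_analysis.py via
# --augment_trace / --analyze / --diff (exact invocation details are an assumption).
```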
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,937,133,660
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"open source",
"topic: not user facing",
"keep-going"
] | 2
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [b18528c455d0297b89b255e93b86ff668069459f](https://github.com/intel/torch-xpu-ops/commit/b18528c455d0297b89b255e93b86ff668069459f), which includes:
- Bugfix of a performance issue relating to GRF configuration.
| true
|
2,937,106,710
|
Add "xpu" to __all__ for torch/version.py
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: xpu"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149695
As the title states.
| true
|
2,937,082,208
|
torch.det results in nans for torch.func.hessian
|
alecjacobson
|
open
|
[
"module: numerical-stability",
"triaged",
"module: functorch"
] | 0
|
NONE
|
### 🐛 Describe the bug
This minified example shows `torch.func.hessian` acting up when the function in question involves `torch.det`.
```python
import torch
x = torch.tensor([[1,0],[1,1]],dtype=torch.float64,requires_grad=True)
def phi(x):
A = torch.tensor([[1,0],[-1,1]],dtype=torch.float64)
J = x @ A
return torch.det(J)
value = phi(x)
print(value)
value.backward()
grad = x.grad
x.grad = None
# all nans
H = torch.func.hessian(phi)(x)
print(H)
# correct hessian
H = torch.autograd.functional.hessian(phi, x)
print(H)
```
It prints
```
tensor(1., dtype=torch.float64, grad_fn=<LinalgDetBackward0>)
tensor([[[[nan, nan],
[nan, nan]],
[[nan, nan],
[nan, nan]]],
[[[nan, nan],
[nan, nan]],
[[nan, nan],
[nan, nan]]]], dtype=torch.float64, grad_fn=<ViewBackward0>)
tensor([[[[ 0., 0.],
[ 0., 1.]],
[[ 0., 0.],
[-1., 0.]]],
[[[ 0., -1.],
[ 0., 0.]],
[[ 1., 0.],
[ 0., 0.]]]], dtype=torch.float64)
```
showing that `torch.autograd.functional.hessian` is giving the correct result.
### Versions
Collecting environment information...
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.31.1
Libc version: N/A
Python version: 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:48:25) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.2
[pip3] numpyeigen-example-project==0.0.0
[pip3] onnx==1.12.0
[pip3] onnx2torch==1.5.6
[pip3] onnxruntime==1.14.1
[pip3] pytorch3d==0.7.4
[pip3] rotary-embedding-torch==0.2.3
[pip3] torch==2.6.0
[pip3] torchvision==0.18.1
[conda] numpy 2.0.2 pypi_0 pypi
[conda] numpyeigen-example-project 0.0.0 dev_0 <develop>
[conda] onnx2torch 1.5.6 pypi_0 pypi
[conda] pytorch3d 0.7.4 pypi_0 pypi
[conda] rotary-embedding-torch 0.2.3 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchvision 0.18.1 pypi_0 pypi
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,937,080,358
|
Fix broken LazyLinear init
|
vmoens
|
closed
|
[
"module: nn",
"Merged",
"ciflow/trunk",
"release notes: nn"
] | 7
|
CONTRIBUTOR
|
Fixes #149691
I believe it does not negatively impact the fix in https://github.com/pytorch/pytorch/pull/147599, as the tests still pass, but @FFFrog should confirm.
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,937,073,364
|
Improve error message for `torch.fft.ihfft2` when input's dtype is complex
|
shink
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Fixes #149625
For the case mentioned in the issue, you will now get:
```
RuntimeError: Only supports floating-point dtypes, but found: ComplexDouble
```
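A hypothetical repro of this kind of case (the exact input from the issue isn't quoted here, so this is an assumption): `ihfft2` expects a real-valued floating-point input, so a complex tensor should now hit the clearer message.
```python
import torch

x = torch.randn(4, 4, dtype=torch.complex128)
# RuntimeError: Only supports floating-point dtypes, but found: ComplexDouble
torch.fft.ihfft2(x)
```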
| true
|
2,937,072,396
|
LazyLinear broken by new init logic
|
vmoens
|
closed
|
[
"high priority",
"module: nn",
"triaged",
"module: regression",
"module: lazy"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The [following PR](https://github.com/pytorch/pytorch/pull/147599) broke the init of lazy linear:
```
python -c """
from torch import nn
import torch
l = nn.LazyLinear(4)
print(l(torch.randn(3)))
print(l(torch.randn(3)))
"""
```
prints
```
tensor([0., 0., 0., 0.], grad_fn=<ViewBackward0>)
```
This is because `reset_parameters()` is now effectively ignored: the call to `reset_parameters()` here:
https://github.com/pytorch/pytorch/blob/362b40939dd6faeebf0569beac563afa51e81dcd/torch/nn/modules/linear.py#L291
triggers this block
https://github.com/pytorch/pytorch/blob/362b40939dd6faeebf0569beac563afa51e81dcd/torch/nn/modules/linear.py#L281-L283
but `in_features` is now always 0 at that point (previously, the value was set to a non-zero value before that call).
## Solution
I will submit a PR soon
### Versions
nightly
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,937,022,046
|
flip test_cache
|
laithsakka
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149690
* #149267
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,936,952,481
|
[Dynamo] Cleanup state management for ctx managers
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
Removes state indirection for ctx managers. This isn't needed anymore since VTs are mutable.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149689
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,947,478
|
[hop] support base_hop._gen_schema
|
ydwu4
|
closed
|
[
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: jit",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150369
* __->__ #149688
This PR creates two utils for generating a schema for hops from example inputs and uses base hop as an example.
1. HopArgumentInfoGen creates an argument or an output schema with mutation information.
2. CFuncitonSchemaGen pieces together the argument info of inputs and outputs and produces a torch._C.FunctionSchema.
The is_write attribute of the argument info can be computed. Note that the is_write annotation only works when the inputs are flattened (e.g. it cannot support mutation inside a tuple). We need special handling for the case where we have tuple inputs, such as cond.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,936,894
|
[MPS] Add support for `modified_bessel_k1` to eager and inductor.
|
dcci
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
MEMBER
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,936,928,150
|
[Dynamo] Remove partial graph printing on data-dependent graph breaks
|
mlazos
|
closed
|
[
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
CONTRIBUTOR
|
Checking with Bob offline, but this can be achieved on a conditional basis with `TORCH_LOGS="graph_code"`
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149686
* #149685
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,928,096
|
[Hierarchical Compilation] Handle origin nodes without children
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3
|
CONTRIBUTOR
|
Bug discovered running Hierarchical Compilation on HF.
I don't have a smaller repro for this unfortunately.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149686
* __->__ #149685
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,907,478
|
Add elu as core ATen
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149684
Differential Revision: [D71590420](https://our.internmc.facebook.com/intern/diff/D71590420/)
| true
|
2,936,883,372
|
Do not depend on numpy during the import
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"module: functorch",
"release notes: torch.func"
] | 5
|
CONTRIBUTOR
|
But a good followup would be to use torch primitives instead of numpy here
Fixes https://github.com/pytorch/pytorch/issues/149681
Test plan: Monkey-patch 2.7.0-rc and run `python -c "import torch;print(torch.compile(lambda x:x.sin() + x.cos())(torch.rand(32)))"`
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,936,858,272
|
[BE] Introduce `lapack_work_to_int` function
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"topic: bug fixes"
] | 3
|
CONTRIBUTOR
|
That could be used to safely cast floating values to int by adding an ULP, which is a followup after https://github.com/pytorch/pytorch/pull/146456
Fixes https://github.com/pytorch/pytorch/issues/149591
(Not adding unittest as it's just going to be too slow)
Test plan:
```
% python3 -c "import torch; torch.pinverse(torch.rand(50000, 8193))"
```
Before the change errored out with
```
RuntimeError: false INTERNAL ASSERT FAILED at "pytorch/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp":1605, please report a bug to PyTorch. linalg.svd: Argument 12 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
| true
|
2,936,832,793
|
Torch.compile is failing if numpy is not installed
|
atalman
|
open
|
[
"high priority",
"module: binaries",
"triaged",
"oncall: pt2"
] | 11
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Install torch:
``pip3 install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu126``
Run smoke test on torch 2.7 rc1:
https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/smoke_test/smoke_test.py
Output
```
python3 smoke_test.py --package torchonly
/usr/local/lib64/python3.9/site-packages/torch/_subclasses/functional_tensor.py:276: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
torch: 2.7.0+cu126
ATen/Parallel:
at::get_num_threads() : 8
at::get_num_interop_threads() : 16
OpenMP 201511 (a.k.a. OpenMP 4.5)
omp_get_max_threads() : 8
Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 8
Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d)
std::thread::hardware_concurrency() : 16
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
Skip version check for channel None as stable version is None
Testing smoke_test_conv2d
Testing smoke_test_linalg on cpu
Numpy check skipped. Numpy is not installed.
Testing smoke_test_compile for cpu and torch.float16
Traceback (most recent call last):
File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 430, in <module>
main()
File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 424, in main
smoke_test_cuda(
File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 222, in smoke_test_cuda
smoke_test_compile("cuda" if torch.cuda.is_available() else "cpu")
File "/pytorch/.ci/pytorch/smoke_test/smoke_test.py", line 322, in smoke_test_compile
x_pt2 = torch.compile(foo)(x)
File "/usr/local/lib64/python3.9/site-packages/torch/__init__.py", line 2574, in compile
return torch._dynamo.optimize(
File "/usr/local/lib64/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 944, in optimize
return _optimize(rebuild_ctx, *args, **kwargs)
File "/usr/local/lib64/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 998, in _optimize
backend = get_compiler_fn(backend)
File "/usr/local/lib64/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 878, in get_compiler_fn
from .repro.after_dynamo import wrap_backend_debug
File "/usr/local/lib64/python3.9/site-packages/torch/_dynamo/repro/after_dynamo.py", line 35, in <module>
from torch._dynamo.debug_utils import (
File "/usr/local/lib64/python3.9/site-packages/torch/_dynamo/debug_utils.py", line 44, in <module>
from torch._dynamo.testing import rand_strided
File "/usr/local/lib64/python3.9/site-packages/torch/_dynamo/testing.py", line 33, in <module>
from torch._dynamo.backends.debugging import aot_eager
File "/usr/local/lib64/python3.9/site-packages/torch/_dynamo/backends/debugging.py", line 35, in <module>
from functorch.compile import min_cut_rematerialization_partition
File "/usr/local/lib64/python3.9/site-packages/functorch/compile/__init__.py", line 2, in <module>
from torch._functorch.aot_autograd import (
File "/usr/local/lib64/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 135, in <module>
from .partitioners import default_partition
File "/usr/local/lib64/python3.9/site-packages/torch/_functorch/partitioners.py", line 43, in <module>
from ._activation_checkpointing.knapsack_evaluator import KnapsackEvaluator
File "/usr/local/lib64/python3.9/site-packages/torch/_functorch/_activation_checkpointing/knapsack_evaluator.py", line 5, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'
```
I believe numpy is an optional dependency, hence we should address this for release 2.7.0.
This was discovered after landing the PR that skips the numpy test: https://github.com/pytorch/pytorch/pull/149356 and running the validation workflow without numpy.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar @chauhang @penguinwu
### Versions
2.7.0
| true
|
2,936,830,681
|
[MPS] nanmedian with dims
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: improvements",
"module: mps",
"release notes: mps"
] | 4
|
COLLABORATOR
|
Third most voted op from #77764
Tests were deleted because they are covered by the regular test_output_match tests; they were redundant, having been added in the previous PR before this nanmedian dim version was implemented.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,936,829,272
|
cd: There's no way to test changes to container images for binary builds
|
seemethere
|
closed
|
[
"oncall: releng",
"triaged"
] | 2
|
MEMBER
|
Was doing a bit of exploration in https://github.com/pytorch/pytorch/pull/149675 when I realized that we actually hardcode all of our binary builds to run against the `main` tag. (see [logs](https://github.com/pytorch/pytorch/actions/runs/13980596619/job/39144800674?pr=149675#step:13:86))
https://github.com/pytorch/pytorch/blob/1d3c50fcc5156e6509d2a64727401c01a5998df0/.github/workflows/generated-linux-binary-manywheel-nightly.yml#L58
## What does this mean?
When a developer makes changes to the container images used for docker builds, the binary builds we trigger using `ciflow/binaries` use the upstream `main` tag instead of a new image built from those changes (the way our pull request / trunk workflows do).
## Why is this bad?
This means that any developer making changes to the container images for binary builds can't actually test their changes until they land into `main` since that's when the `main` tagged images are built.
## How should we fix this?
I propose starting out by looking at how calculate-docker-image works and utilizing that within our binary build workflows.
https://github.com/pytorch/pytorch/blob/1d3c50fcc5156e6509d2a64727401c01a5998df0/.github/workflows/_linux-build.yml#L135-L140
This should be conditional, though: we should only use those images when the request comes from a ciflow/binaries trigger, and keep using the main-tagged images for nightly / release builds.
| true
|
2,936,824,365
|
[ONNX] Set is_in_onnx_export for dynamo=True
|
titaiwangms
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: new features",
"suppress-bc-linter"
] | 4
|
COLLABORATOR
|
Fixes #149141
| true
|
2,936,801,101
|
[ROCm][TunableOp] Fix offline tuning for ScaledGEMM.
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
The main purpose of this PR is to fix offline tuning for ScaledGEMM. The previous UT passed because it was not strict enough. Additionally:
- All the offline tuning tests now do a comparison with the online results to ensure that ParamSignature match.
- We raise an error if submatrices are encountered as this is only supported in online tuning mode.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,936,763,792
|
[dynamo] keep chained exceptions in user-facing tracebacks
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compile ux"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149676
This preserves graph breaks in the case that one graph break directly causes another, e.g. graph breaks in generic context managers.
```python
import torch
class CtxMgr:
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
pass
@torch.compile(backend="eager", fullgraph=True)
def fn():
with CtxMgr():
with CtxMgr():
pass
with CtxMgr():
with CtxMgr():
pass
torch._dynamo.graph_break()
fn()
```
Output:
```
torch._dynamo.exc.Unsupported: Call to `torch._dynamo.graph_break()`
Explanation: User-inserted graph break. Message: None
Hint: Remove the `torch._dynamo.graph_break()` call.
Developer debug context: Called `torch._dynamo.graph_break()` with args `[]`, kwargs `{}`
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/data/users/williamwen/pytorch/playground.py", line 23, in <module>
fn()
File "/data/users/williamwen/pytorch/torch/_dynamo/eval_frame.py", line 664, in _fn
raise e.with_traceback(None) from e.__cause__
torch._dynamo.exc.Unsupported: Graph break under GenericContextWrappingVariable
Explanation: Attempted to graph break in an active context manager(s) that doesn't support graph breaking.
Hint: Move the offending context manager(s) to outside the compiled region.
Hint: This graph break may have been caused by an earlier graph break. Resolving the earlier graph break may resolve this one.
Developer debug context: Active generic context managers: [GenericContextWrappingVariable(CtxMgr), GenericContextWrappingVariable(CtxMgr)]
from user code:
File "/data/users/williamwen/pytorch/playground.py", line 20, in fn
torch._dynamo.graph_break()
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
Note in particular that both graph breaks (torch._dynamo.graph_break and graph break in context manager) are present in the logs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,761,180
|
DO NOT MERGE: Testing sequential builds for cuda + cpu
|
seemethere
|
open
|
[
"topic: not user facing"
] | 1
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149675
* #143672
* #148419
Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
| true
|
2,936,753,184
|
[DTensor] Document uneven sharding semantics
|
wconstab
|
closed
|
[
"oncall: distributed",
"module: dtensor",
"release notes: distributed (dtensor)"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149764
* __->__ #149674
Defines and documents behaviors that are implicit in DTensor design
today.
Partially addresses #143372
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,936,718,583
|
Extract reusable portions of elu_kernel into header
|
swolchok
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149673
Similar to #140425, we are making the implementation usable via header-only code sharing.
Review note: #62546 by @yanbing-j removed expm1 usage from this path. I don't know why, and expm1 should be more efficient, so I've put it back. Please let me know if there is a good reason I shouldn't.
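For context, a scalar reference of the math in question (an illustrative sketch only, not the kernel code):
```python
import math

def elu_ref(x: float, alpha: float = 1.0) -> float:
    # ELU(x) = x for x > 0, otherwise alpha * (exp(x) - 1).
    # math.expm1(x) computes exp(x) - 1 without cancellation for small |x|,
    # which is the accuracy (and efficiency) argument for keeping expm1 in the kernel.
    return x if x > 0 else alpha * math.expm1(x)
```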
Testing: existing correctness tests should cover.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,936,718,337
|
Add release branch push triggers to inductor-rocm-mi300.yml
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 5
|
COLLABORATOR
|
In similar vein as https://github.com/pytorch/pytorch/pull/149517
When we added rocm-mi300.yml earlier this year, we had lower capacity and were just pipecleaning the workflow, so we set the trigger to only respond to pushes to the main branch. But now we have more stability as well as capacity, and we would really like to ensure that the release branch is being tested on MI300s as well.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,936,716,047
|
DISABLED test_ddp_graphs (__main__.StructuredTraceTest)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped"
] | 3
|
NONE
|
Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_ddp_graphs&suite=StructuredTraceTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39135025462).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_ddp_graphs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_structured_trace.py", line 617, in test_ddp_graphs
self.assertExpectedInline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3096, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '{"dy[3805 chars]id": 3, "ndim": 2, "dtype": "torch.float32", "[8572 chars]0}\n' != '{"dy[3805 chars]id": 8, "ndim": 2, "dtype": "torch.float32", "[9766 chars]0}\n'
{"dynamo_start": {"stack": "STACK"}, "rank": 0, "frame_id": 0, "frame_compile_id": 0, "attempt": 0}
{"artifact": {"name": "dynamo_graph_break_reason", "encoding": "string"}, "rank": 0, "frame_id": 0, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"dynamo_cpp_guards_str": {}, "rank": 0, "frame_id": 0, "frame_compile_id": 0, "attempt": 1, "has_payload": "HASH"}
{"compilation_metrics": "METRICS", "rank": 0, "frame_id": 0, "frame_compile_id": 0, "attempt": 1}
{"dynamo_start": {"stack": "STACK"}, "rank": 0, "frame_id": 1, "frame_compile_id": 0, "attempt": 0}
{"artifact": {"name": "dynamo_graph_break_reason", "encoding": "string"}, "rank": 0, "frame_id": 1, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"dynamo_cpp_guards_str": {}, "rank": 0, "frame_id": 1, "frame_compile_id": 0, "attempt": 1, "has_payload": "HASH"}
{"compilation_metrics": "METRICS", "rank": 0, "frame_id": 1, "frame_compile_id": 0, "attempt": 1}
{"dynamo_start": {"stack": "STACK"}, "rank": 0, "frame_id": 2, "frame_compile_id": 0, "attempt": 0}
{"artifact": {"name": "dynamo_graph_break_reason", "encoding": "string"}, "rank": 0, "frame_id": 2, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"dynamo_cpp_guards_str": {}, "rank": 0, "frame_id": 2, "frame_compile_id": 0, "attempt": 1, "has_payload": "HASH"}
{"compilation_metrics": "METRICS", "rank": 0, "frame_id": 2, "frame_compile_id": 0, "attempt": 1}
{"dynamo_start": {"stack": "STACK"}, "rank": 0, "frame_id": 3, "frame_compile_id": 0, "attempt": 0}
{"compilation_metrics": "METRICS", "rank": 0, "frame_id": 3, "frame_compile_id": 0, "attempt": 0}
{"dynamo_start": {"stack": "STACK"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_storage": {"id": 0, "describer_id": "ID", "size": 4194304}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_tensor": {"id": 0, "ndim": 2, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024, 1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1024, 1], "storage": 0, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_source": {"describer_id": "ID", "id": 0, "source": "L['self']._modules['layers']._modules['0']._parameters['weight']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_storage": {"id": 1, "describer_id": "ID", "size": 4096}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_tensor": {"id": 1, "ndim": 1, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1], "storage": 1, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_source": {"describer_id": "ID", "id": 1, "source": "L['self']._modules['layers']._modules['0']._parameters['bias']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_storage": {"id": 2, "describer_id": "ID", "size": 4194304}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_tensor": {"id": 2, "ndim": 2, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024, 1024], "is_leaf": true, "stride": [1024, 1], "storage": 2, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_source": {"describer_id": "ID", "id": 2, "source": "L['x']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_storage": {"id": 3, "describer_id": "ID", "size": 4194304}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
- {"describe_tensor": {"id": 3, "ndim": 2, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024, 1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1024, 1], "storage": 3, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^^^^^^^^^^^^
+ {"describe_tensor": {"id": 8, "ndim": 2, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024, 1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1024, 1], "storage": 3, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^^^^^^^^^^^^
- {"describe_source": {"describer_id": "ID", "id": 3, "source": "L['self']._modules['layers']._modules['1']._parameters['weight']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^
+ {"describe_source": {"describer_id": "ID", "id": 8, "source": "L['self']._modules['layers']._modules['1']._parameters['weight']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^
{"describe_storage": {"id": 4, "describer_id": "ID", "size": 4096}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
- {"describe_tensor": {"id": 4, "ndim": 1, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1], "storage": 4, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^^^^^^^^^
+ {"describe_tensor": {"id": 9, "ndim": 1, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1], "storage": 4, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^^^^^^^^^
- {"describe_source": {"describer_id": "ID", "id": 4, "source": "L['self']._modules['layers']._modules['1']._parameters['bias']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^
+ {"describe_source": {"describer_id": "ID", "id": 9, "source": "L['self']._modules['layers']._modules['1']._parameters['bias']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^
{"dynamo_output_graph": {"sizes": {"l_self_modules_layers_modules_0_parameters_weight_": [1024, 1024], "l_self_modules_layers_modules_0_parameters_bias_": [1024], "l_x_": [1024, 1024], "l_self_modules_layers_modules_1_parameters_weight_": [1024, 1024], "l_self_modules_layers_modules_1_parameters_bias_": [1024], "input_1": [1024, 1024], "input_2": [1024, 1024]}}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"optimize_ddp_split_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"optimize_ddp_split_child": {"name": "submod_0"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"optimize_ddp_split_child": {"name": "submod_1"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"describe_storage": {"id": 0, "describer_id": "ID", "size": 4194304}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_tensor": {"id": 0, "ndim": 2, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024, 1024], "is_leaf": true, "stride": [1024, 1], "storage": 0, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_source": {"describer_id": "ID", "id": 0, "source": "L['x']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_storage": {"id": 1, "describer_id": "ID", "size": 4194304}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_tensor": {"id": 1, "ndim": 2, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024, 1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1024, 1], "storage": 1, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_source": {"describer_id": "ID", "id": 1, "source": "L['self']._modules['layers']._modules['0']._parameters['weight']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_storage": {"id": 2, "describer_id": "ID", "size": 4096}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_tensor": {"id": 2, "ndim": 1, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1], "storage": 2, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"describe_source": {"describer_id": "ID", "id": 2, "source": "L['self']._modules['layers']._modules['0']._parameters['bias']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
{"inductor_pre_grad_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"artifact": {"name": "before_recompile_pre_grad", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"artifact": {"name": "after_recompile_pre_grad", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"aot_joint_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"artifact": {"name": "torch._functorch.config", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"artifact": {"name": "aot_forward_graph_fw_metadata", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"aot_forward_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"aot_backward_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
+ {"artifact": {"name": "fx_graph_runnable", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
+ {"artifact": {"name": "before_recompile_post_grad", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
+ {"artifact": {"name": "after_recompile_post_grad", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
+ {"inductor_post_grad_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"inductor_output_code": {"filename": "FILENAME"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
- {"artifact": {"name": "fx_graph_cache_hit", "encoding": "json"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
? ^ ^
+ {"artifact": {"name": "fx_graph_cache_miss", "encoding": "json"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
? ^ ^^
{"artifact": {"name": "aotautograd_cache_bypass", "encoding": "json"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"describe_storage": {"id": 16, "describer_id": "ID", "size": 4194304}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
- {"describe_tensor": {"id": 16, "ndim": 2, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024, 1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1024, 1], "storage": 16, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^^^^^^^^^^^^^
+ {"describe_tensor": {"id": 29, "ndim": 2, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024, 1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1024, 1], "storage": 16, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^^^^^^^^^^^^^
- {"describe_source": {"describer_id": "ID", "id": 16, "source": "L['self']._modules['layers']._modules['1']._parameters['weight']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^
+ {"describe_source": {"describer_id": "ID", "id": 29, "source": "L['self']._modules['layers']._modules['1']._parameters['weight']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^
{"describe_storage": {"id": 17, "describer_id": "ID", "size": 4096}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
- {"describe_tensor": {"id": 17, "ndim": 1, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1], "storage": 17, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^^^^^^^^^^^^
+ {"describe_tensor": {"id": 30, "ndim": 1, "dtype": "torch.float32", "device": "device(type='cuda', index=0)", "size": [1024], "is_leaf": true, "requires_grad": true, "is_parameter": true, "stride": [1], "storage": 17, "view_func": "VIEW_FUNC", "describer_id": "ID"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ++++++++++++ ^
- {"describe_source": {"describer_id": "ID", "id": 17, "source": "L['self']._modules['layers']._modules['1']._parameters['bias']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^
+ {"describe_source": {"describer_id": "ID", "id": 30, "source": "L['self']._modules['layers']._modules['1']._parameters['bias']"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
? ^^
{"inductor_pre_grad_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"artifact": {"name": "before_recompile_pre_grad", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"artifact": {"name": "after_recompile_pre_grad", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"aot_joint_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"artifact": {"name": "torch._functorch.config", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"artifact": {"name": "aot_forward_graph_fw_metadata", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"aot_forward_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"aot_backward_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
+ {"artifact": {"name": "fx_graph_runnable", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
+ {"artifact": {"name": "before_recompile_post_grad", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
+ {"artifact": {"name": "after_recompile_post_grad", "encoding": "string"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
+ {"inductor_post_grad_graph": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"inductor_output_code": {"filename": "FILENAME"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
- {"artifact": {"name": "fx_graph_cache_hit", "encoding": "json"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
? ^ ^
+ {"artifact": {"name": "fx_graph_cache_miss", "encoding": "json"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
? ^ ^^
{"artifact": {"name": "aotautograd_cache_bypass", "encoding": "json"}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"dynamo_cpp_guards_str": {}, "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0, "has_payload": "HASH"}
{"compilation_metrics": "METRICS", "rank": 0, "frame_id": 4, "frame_compile_id": 0, "attempt": 0}
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_structured_trace.py StructuredTraceTest.test_ddp_graphs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_structured_trace.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000
| true
|
2,936,716,046
|
DISABLED test_lazy_module_speculation_log_divergence (__main__.NNModuleTests)
|
pytorch-bot[bot]
|
closed
|
[
"module: nn",
"triaged",
"module: flaky-tests",
"skipped"
] | 2
|
NONE
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_lazy_module_speculation_log_divergence&suite=NNModuleTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39123845623).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_lazy_module_speculation_log_divergence`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_modules.py", line 1695, in test_lazy_module_speculation_log_divergence
self.assertTrue(torch.allclose(expect_res, actual_res))
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.13/lib/python3.13/unittest/case.py", line 744, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
python test/dynamo/test_modules.py NNModuleTests.test_lazy_module_speculation_log_divergence
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_modules.py`
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @clee2000
| true
|
2,936,675,485
|
[easy] Do not logspam if static cuda launcher is disabled
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149629
* #149657
* #149442
* #149054
* __->__ #149669
No need to log.info every time someone runs with StaticCudaLauncher disabled.
Test plan: Run any benchmark and see that we don't spam the bypass message in logs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,936,642,158
|
[ONNX] Improve docstring of onnx symbolic ops
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: docs"
] | 6
|
COLLABORATOR
|
Better examples
| true
|
2,936,637,083
|
[invoke_subgraph][fake tensor cache] Add a finalizer for id hashed objects
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148953
* #150036
* __->__ #149667
* #149087
| true
|
2,936,628,754
|
Remove effect token unbacked bindings when removing with_effect nodes
|
yushangdi
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary:
**Export**
Fix `node.meta["unbacked_bindings"]` when removing the `with_effect` wrapper in the `ep.module()` call.
Test Plan:
```
buck run //caffe2/test:test_export -- -r test_custom_obj_unbacked_symint
```
Differential Revision: D71567148
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,936,627,016
|
Use source hashing to generate consistent symbolic ids
|
bobrenjc93
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ciflow/slow",
"ci-no-td"
] | 34
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149665
This PR was inspired by internal models that were cache missing due to PGO. At a high level the problem looks as follows
Run 1, Invocation 1: We do static compile, save some example values in PGO/automatic dynamic
Run 1, Invocation 2: We detect varying inputs, do a dynamic compile, get a dynamic graph and save to PGO. Crucially, what we save to PGO is a superset of what is actually dynamic: if we notice an input was varying, we mark it as dynamic in PGO even if that value later gets specialized. When a value gets specialized, we actually remove its symbol from the graph. This results in an interesting conundrum where, although we are producing the same isomorphic graph, PGO makes the second run cache miss. Let's see how....
Run 2, Invocation 1: We fetch the PGO, over-mark things as dynamic, get an fx graph, look it up in the cache and... whoops! cache miss! This is because of the aforementioned behavior where the PGO profile causes us to over-allocate symbols. In practice this means we end up saving a graph in the cache with symbols x:s1, y:s3 and on the second attempt we cache miss with x:s1, y:s6, where symbols s3, s4, s5 were all optimistically marked dynamic by PGO and subsequently specialized.
We solve this problem by hashing the source names, which ensures a reasonably stable assignment. To prevent symbol collisions, we use linear probing.
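A toy sketch of the allocation scheme described above (purely illustrative; the names and details are not the actual implementation):
```python
import hashlib

def allocate_symbol_index(source_name: str, taken: set, table_size: int = 1 << 16) -> int:
    # A deterministic hash of the source string (e.g. "L['x'].size()[0]") picks a stable
    # start slot, so the same input source tends to get the same symbol id across runs.
    idx = int(hashlib.sha256(source_name.encode()).hexdigest(), 16) % table_size
    # Linear probing: if another source already claimed the slot, walk forward to a free one.
    while idx in taken:
        idx = (idx + 1) % table_size
    taken.add(idx)
    return idx
```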
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,936,623,406
|
dynamo_compile: Log all compilation time under all_compilation_types
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149664
This counter is designed to include all compilation PyTorch does (triton + dynamo_compile). However, it wasn't including all of dynamo compilation, since it was placed at the fx_codegen_and_compile spot.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,936,615,810
|
[export] Add min & max as attribute hints to Dim
|
ColinPeppler
|
closed
|
[
"fb-exported",
"release notes: export"
] | 2
|
CONTRIBUTOR
|
Summary:
I see this pyre error.
```
Undefined attribute [16]: `torch.export.dynamic_shapes.Dim` has no attribute `max`.
```
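For context, a minimal example of the attribute accesses that trigger the pyre error (the `Dim` name and bounds here are made up):
```python
from torch.export import Dim

batch = Dim("batch", min=1, max=128)
# These accesses work at runtime; the PR adds min/max attribute hints so
# type checkers like pyre accept them as well.
print(batch.min, batch.max)
```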
Differential Revision: D71575304
| true
|
2,936,582,842
|
[ONNX] Use onnx Attention operator for scaled_dot_product_attention
|
csoiram
|
open
|
[
"module: onnx",
"triaged"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
ONNX has introduced an Attention (MHA) operator in opset 23 (https://onnx.ai/onnx/operators/onnx__Attention.html#l-onnx-op-attention-23). This could be used when exporting scaled_dot_product_attention to ONNX format. Currently, scaled_dot_product_attention gets broken down into its constituent ops when exporting to ONNX, which complicates the model and makes it harder to identify the attention block when compiling the network for inference on custom HW backends.
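For illustration, a minimal sketch of the export in question (the decomposition into primitive ops is today's behavior; mapping to a single opset-23 Attention node is the requested change and is assumed not to exist yet):
```python
import torch
import torch.nn.functional as F

class Attn(torch.nn.Module):
    def forward(self, q, k, v):
        return F.scaled_dot_product_attention(q, k, v)

q = k = v = torch.randn(1, 8, 16, 64)
# Today this exports SDPA as a subgraph of primitive ONNX ops; the request
# is to emit the single opset-23 Attention operator instead.
onnx_program = torch.onnx.export(Attn(), (q, k, v), dynamo=True)
```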
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,936,577,880
|
preserve custom meta in placeholders
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149661
Fixes #147338
Differential Revision: [D71573533](https://our.internmc.facebook.com/intern/diff/D71573533/)
| true
|
2,936,568,118
|
cd: Migrate binary builds off of Jinja
|
seemethere
|
open
|
[
"oncall: releng",
"triaged",
"better-engineering"
] | 0
|
MEMBER
|
The binary builds that we currently have are still on Jinja and they don't actually need to be.
This is the following list of things that need to be done:
- [ ] Nightly branch Linux
- [ ] Main branch Linux
- [ ] Nightly branch Windows
- [ ] Main branch Windows
- [ ] Nightly branch Windows arm64
- [ ] Nightly branch macOS
- [ ] Nightly branch Linux arm64
- [ ] Nightly branch s390x
The way I'm envisioning this move is that we should tackle Linux builds first since that'll probably be the easiest and then base all of the other moves off of this.
Some notes that I have for this move in particular:
* We've overloaded .github/scripts/generate_binary_build_matrix to include a ton of things that need to be migrated to the jinja-less workflows, including:
* _PYTORCH_EXTRA_INSTALL_REQUIREMENTS_
* `-full` package building
| true
|
2,936,563,589
|
Fix broken dynamo_timed test due to python_version field
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149659
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,552,515
|
unexpected kwarg 'no_python_abi_suffix' when installing stable version of pytorch with sam2
|
rbavery
|
open
|
[
"high priority",
"module: build",
"module: cpp-extensions",
"triaged",
"has workaround"
] | 7
|
NONE
|
### 🐛 Describe the bug
Installing sam2, which requires building a C++ extension, used to work with pytorch 2.6. Today it no longer does, even though nothing has changed about my environment.
```
> [wherobots-inference-gpu 13/14] RUN uv pip install -e dockers/gpu/. --extra-index-url https://download.pytorch.org/whl/cu124 --index-strategy unsafe-best-match && uv pip install pytest memray:
64.45 "/home/wherobots/.cache/uv/builds-v0/.tmpWPVtRE/lib/python3.11/site-packages/torch/utils/cpp_extension.py",
64.45 line 520, in __init__
64.45 super().__init__(*args, **kwargs)
64.45 TypeError: Command.__init__() got an unexpected keyword argument
64.45 'no_python_abi_suffix'
64.45
64.45 hint: This usually indicates a problem with the package or the build
64.45 environment.
64.45 help: `sam2` (v1.1.0) was included because `wherobots-inference-gpu`
64.45 (v1.7.0) depends on `sam2`
```
It looks like this is being addressed in https://github.com/pytorch/pytorch/pull/149636. Hoping this can get merged soon!
### Versions
no environment can be installed with sam2 and pytorch 2.6
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @xmfan
| true
|
2,936,497,496
|
[StaticCudaLauncher] Support sharedMemBytes > 48KB
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150108
* #150107
* #149054
* __->__ #149657
Triton does some special handling when requesting more than 48 KB of shared memory: specifically, it queries the device for the maximum shared memory per block, then sets the kernel's maximum dynamic shared memory to the difference between that device limit and the kernel's static shared memory usage.
See corresponding implementation in triton land here:
https://github.com/triton-lang/triton/blob/main/third_party/nvidia/backend/driver.c#L128-L143
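A minimal sketch of that logic in pure Python (the function and parameter names here are hypothetical; the real code makes the equivalent CUDA driver calls, i.e. cuDeviceGetAttribute and cuFuncSetAttribute):
```python
def dynamic_smem_budget(requested_bytes: int, device_smem_optin: int, kernel_static_smem: int) -> int:
    # Below the default 48 KB limit no special handling is needed.
    if requested_bytes <= 48 * 1024:
        return requested_bytes
    # Above it, the most dynamic shared memory a kernel may use is the
    # device's opt-in maximum minus what the kernel already uses statically.
    budget = device_smem_optin - kernel_static_smem
    if requested_bytes > budget:
        raise RuntimeError(
            f"requested {requested_bytes} bytes of shared memory, but only {budget} are available"
        )
    # The launcher would then set CU_FUNC_ATTRIBUTE_MAX_DYNAMIC_SHARED_SIZE_BYTES
    # on the kernel to `budget` before launching.
    return budget
```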
Test plan:
- New unit test requesting more than 48 KB of memory
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,936,491,507
|
[WIP] avoid speicializing sym_max and sym_min
|
laithsakka
|
closed
|
[
"release notes: fx",
"fx",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149656
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,936,437,864
|
elif is not a cmake keyword
|
atupone
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
The test for pocketfft_header not being in its expected place is wrong.
| true
|
2,936,417,727
|
Make sure to write to caches atomically
|
aorenste
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
This is an attempt to fix #119698.
I was unable to reproduce the originally described problem on the latest trunk, but the proposed fix makes sense. Instead of adding locks like the original (unlanded) fix, I changed a few of the cache writes to be atomic file swaps (write to a temp file, then rename it), which should have the same effect without blocking reads.
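A minimal sketch of the write-to-temp-then-rename pattern, assuming the temp file is created in the same directory (and therefore on the same filesystem) as the destination:
```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        # os.replace is atomic on POSIX: concurrent readers see either the
        # old file or the new one, never a partially written cache entry.
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```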
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149654
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,936,398,244
|
Compile generating empty cudagraphs when generated graph has no compute
|
HDCharles
|
open
|
[
"triaged",
"module: cuda graphs",
"oncall: pt2"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Compile seems to generate empty cudagraphs when the graph has no compute; this can create a deluge of warning messages for a multi-layer model. Compile should avoid cudagraphing empty graphs.
tlparse:
https://gist.github.com/HDCharles/c0d418e1307d9f5248b359b2ffa25427
repro:
https://gist.github.com/HDCharles/ec4b8b0c0853b2d3b0204a6441d1a526
### Error logs
UserWarning: The CUDA Graph is empty. This usually means that the graph was attempted to be captured on wrong device or stream. (Triggered internally at /pytorch/aten/src/ATen/cuda/CUDAGraph.cpp:206.)
From tlparse: this is the compute-free graph that's causing the issue:
```
class GraphModule(torch.nn.Module):
def forward(self, L_stack0_: "bf16[6, 4][4, 1]cuda:0"):
l_stack0_ = L_stack0_
# File: /data/users/cdhernandez/gpt-fast/mixtral-moe/even_smaller_moe_repro.py:32 in torch_dynamo_resume_in_forward_at_31, code: return out.reshape(batch_size, -1, self.dim)
reshape: "bf16[1, 6, 4][24, 4, 1]cuda:0" = l_stack0_.reshape(1, -1, 4); l_stack0_ = None
return (reshape,)
```
### Versions
❯ python3 collect_env.py
Collecting environment information...
PyTorch version: 2.7.0.dev20250304+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.34
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk20_zion_2830_g3e5ab162667d-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 535.154.05
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.6.0
/usr/lib64/libcudnn_adv.so.9.6.0
/usr/lib64/libcudnn_cnn.so.9.6.0
/usr/lib64/libcudnn_engines_precompiled.so.9.6.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib64/libcudnn_graph.so.9.6.0
/usr/lib64/libcudnn_heuristic.so.9.6.0
/usr/lib64/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 84%
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4792.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250304+cu126
[pip3] torchao==0.10.0+gitcae1cceb
[pip3] torchaudio==2.6.0.dev20250304+cu126
[pip3] torchdata==0.11.0
[pip3] torchtune==0.0.0
[pip3] torchvision==0.22.0.dev20250304+cu126
[conda] magma-cuda126 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250304+cu126 pypi_0 pypi
[conda] torchao 0.10.0+gitcae1cceb dev_0 <develop>
[conda] torchaudio 2.6.0.dev20250304+cu126 pypi_0 pypi
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchtune 0.0.0 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250304+cu126 pypi_0 pypi
cc @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @chauhang
| true
|
2,936,392,211
|
partitioner: ensure collectives saved by SAC that are actually unused in the bw are properly not saved
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: autograd",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
This PR fixes one of the issues described here: https://github.com/pytorch/torchtitan/issues/866#issuecomment-2726015248
I spent some time trying to write a unit test and ultimately failed. If folks are interested, I can spend more time trying, but otherwise I have an E2E test with torchtitan. Command:
```
CUDA_VISIBLE_DEVICES=1,2,3,4 NGPU=4 CONFIG_FILE="./torchtitan/models/llama/train_configs/llama3_8b.toml" tlp ./run_train.sh --training.steps=30 --training.tensor_parallel_degree=2 --training.compile --experimental.enable_async_tensor_parallel
```
here's the backward graph generated prior to the PR: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/hirsheybar/f7d17388-42c2-4d7e-8a55-a00387341ecb/custom/rank_0/-_0_0_0/aot_backward_graph_9.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
and new backward graph with the PR: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/hirsheybar/ab8576fc-98c1-4915-af47-699aa8e2557e/custom/rank_0/-_0_0_0/aot_backward_graph_9.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000
The main difference is that the input arg `reduce_scatter_tensor_1` is dead code in the bw graph, causing us to unnecessarily save a giant `reduce_scatter` for bw. With the PR, we properly ensure that it is not saved for backward.
More comments in the PR, but the main thing going on is that:
(1) We have some existing logic that checks for activations that are actually dead code in the backward, and removes them
(2) Collectives are not properly handled by this code. Why? A collective is **always** followed by a `wait_tensor()` call, so we need to go one node further and check whether the "dead" node has a wait_tensor user that is also dead (see the sketch below).
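A rough sketch of check (2), assuming torch.fx nodes and a hypothetical `required_bw_nodes` set of nodes the backward actually needs; the wait_tensor target shown is the functional-collectives op and may not be exactly what the partitioner matches:
```python
import torch
import torch.fx as fx

def collective_unused_in_backward(coll_node: fx.Node, required_bw_nodes: set) -> bool:
    # A collective output is only "dead" if every user is either an unneeded
    # node or a wait_tensor() whose own users are all unneeded.
    for user in coll_node.users:
        if user.target is torch.ops._c10d_functional.wait_tensor.default:
            if any(w_user in required_bw_nodes for w_user in user.users):
                return False
        elif user in required_bw_nodes:
            return False
    return True
```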
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149411
* __->__ #149652
* #149514
| true
|
2,936,380,553
|
[ca] fix accumulate grad polyfill when different strides between param and grad
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149784
* #149773
* __->__ #149651
* #149709
* #149647
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,936,380,445
|
[aot] maybe mark activations as dynamic
|
xmfan
|
closed
|
[
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149651
* __->__ #149650
* #149649
* #149647
* #149229
| true
|
2,936,380,317
|
[ca] torch.compile API comments and support older dynamic shapes API used in benchmarks
|
xmfan
|
closed
|
[
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149651
* #149650
* __->__ #149649
* #149647
* #149229
| true
|
2,936,380,284
|
[WIP][dynamic shapes] size-oblivious rewrite for infer_size, contiguity
|
pianpwk
|
closed
|
[
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,936,380,216
|
[ca] use torch.compile ca API for benchmarks
|
xmfan
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149784
* #149773
* #149651
* #149709
* __->__ #149647
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,347,850
|
[ONNX] Support running bfloat16 models with ONNX Runtime
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 15
|
COLLABORATOR
|
Use OrtValue objects to support bfloat16 and other dtypes as inputs. This only supports CUDA, as ORT only implements bfloat16 on CUDA.
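For illustration, a minimal sketch of feeding inputs as OrtValue objects (the model path and input/output names are hypothetical; float32 is shown here, but the same mechanism is what enables bfloat16 inputs on CUDA):
```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
# Wrap the input in an OrtValue placed directly on the CUDA device, instead
# of passing a plain numpy array.
x = ort.OrtValue.ortvalue_from_numpy(np.random.rand(1, 4).astype(np.float32), "cuda", 0)
outputs = sess.run_with_ort_values(["output"], {"input": x})
```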
| true
|
2,936,311,289
|
[torch/c10d] change class variable from private to protected (#149579)
|
GirasoleY
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 9
|
CONTRIBUTOR
|
Summary:
Change class variable from private to protected in ProcessGroupNCCL
Test Plan: Existing UT Pass.
Reviewed By: kingchc, kwen2501
Differential Revision: D71373067
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,936,305,859
|
op should NOT be static in aoti_torch_call_dispatcher
|
pytorchbot
|
closed
|
[
"open source",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
aoti_torch_call_dispatcher is meant to call different ops, so the op must not be static. Otherwise, every call to this API will call the first op that was ever called, which is not the intended behavior of any human being.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149230
* #149052
* __->__ #149208
| true
|
2,936,305,659
|
Remove `torch.utils` from `MOD_SKIPLIST`
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149748
* __->__ #149643
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,294,974
|
[ca] torch.compile API comments and support older dynamic shapes API used in benchmarks
|
xmfan
|
closed
|
[
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149367
* #148516
* __->__ #149642
* #149641
* #149229
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,294,877
|
[ca] use torch.compile ca API for benchmarks
|
xmfan
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149367
* #148516
* #149642
* __->__ #149641
* #149229
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,936,294,265
|
torch.distributed.checkpoint CUDA OOM with broadcast_from_rank0
|
nikonikolov
|
open
|
[
"oncall: distributed",
"module: cuda",
"triaged",
"module: fsdp"
] | 2
|
CONTRIBUTOR
|
I am trying to load an FSDP checkpoint by broadcasting weights from rank 0. The model is already correctly set up on GPU on each rank. I use
```python
model_state_dict = torch.distributed.checkpoint.state_dict.set_model_state_dict(
model=self._model,
model_state_dict=model_state_dict,
options=torch.distributed.checkpoint.state_dict.StateDictOptions(
full_state_dict=True,
cpu_offload=True,
ignore_frozen_params=False,
broadcast_from_rank0=True,
),
)
```
When this call starts executing, I can see the CUDA memory on each GPU rapidly rising from ~20GB to ~40GB per GPU in nvidia-smi. Eventually it fails with CUDA OOM (see stack trace below). When I set `broadcast_from_rank0=False`, it works fine. This is observed with both FSDP1 and FSDP2. `model_state_dict` is empty on all ranks except rank 0.
```
Traceback (most recent call last):
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/components/scripts/train/train.py", line 19, i
n <module>
main()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/components/scripts/train/train.py", line 15, i
n main
TrainJob().run(config)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/components/scripts/train/train_job.py", line 1
8, in run
self.run_trainer(config)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/components/scripts/train/train_job.py", line 1
18, in run_trainer
trainer = Trainer(config=config)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/trainer/trainer.py", line 102, in __init_
_
self._maybe_restore_checkpoint()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/trainer/trainer.py", line 124, in _maybe_
restore_checkpoint
self.load_state_dict(state_dict)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/common/state_dict.py", line 96, in load_s
tate_dict
load_state_dict_method(value)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/common/state_dict.py", line 82, in load_s
tate_dict
self._load_custom_state_dict(state_dict)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/barrel/barrel/core/trainer/training_module.py", line 164, in
_load_custom_state_dict
model_state_dict = torch.distributed.checkpoint.state_dict.set_model_state_dict(
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/checkpoint/st
ate_dict.py", line 1184, in set_model_state_dict
return _load_model_state_dict(model, model_state_dict, info)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/checkpoint/st
ate_dict.py", line 566, in _load_model_state_dict
_state_dict_fn(model, "load_state_dict")(
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2201, in load_state_dict
load(self, state_dict)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2189, in load
load(child, child_state_dict, child_prefix) # noqa: F821
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2189, in load
load(child, child_state_dict, child_prefix) # noqa: F821
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2189, in load
load(child, child_state_dict, child_prefix) # noqa: F821
[Previous line repeated 2 more times]
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2183, in load
module._load_from_state_dict(
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 2034, in _load_from_state_dict
hook(state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/nn/modules/module.py", li
ne 73, in __call__
return self.hook(module, *args, **kwargs)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/utils/_contextlib.py", li
ne 116, in decorate_context
return func(*args, **kwargs)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_state_d
ict_utils.py", line 849, in _pre_load_state_dict_hook
_pre_load_state_dict_hook_fn[fsdp_state._state_dict_type](
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_state_d
ict_utils.py", line 371, in _full_pre_load_state_dict_hook
_enter_unshard_params_ctx(module, fsdp_state, writeback=True)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_state_d
ict_utils.py", line 139, in _enter_unshard_params_ctx
fsdp_state._unshard_params_ctx[module].__enter__()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/external/python_runtime_x86_64-unknown-linux-gnu/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_unshard
_param_utils.py", line 197, in _unshard_fsdp_state_params
_unshard(state, handle, computation_stream, computation_stream)
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_runtime
_utils.py", line 300, in _unshard
handle.unshard()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_flat_pa
ram.py", line 1310, in unshard
unsharded_flat_param = self._alloc_padded_unsharded_flat_param()
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/fsdp/_flat_pa
ram.py", line 1337, in _alloc_padded_unsharded_flat_param
_alloc_storage(unsharded_flat_param, flat_param._padded_unsharded_size) # type: ignore[attr-defined]
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/distributed/utils.py", li
ne 186, in _alloc_storage
tensor._typed_storage()._resize_(size.numel())
File "/scratch/nikolay_nikolov/.cache/bazel/_bazel_nikolay_nikolov/79bf5e678fbb2019f1e30944a206f079/execroot/barrel/bazel-out/k8-opt/bin/barrel/pipes/vlams/torchrun.runfiles/pip-core_torch/site-packages/torch/storage.py", line 1027, i
n _resize_
self._untyped_storage.resize_(size * self._element_size())
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.16 GiB. GPU 0 has a total capacity of 39.38 GiB of which 547.38 MiB is free. Including non-PyTorch memory, this process has 38.84 GiB memory in use. Of the allocated memory
35.91 GiB is allocated by PyTorch, and 363.02 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentati
on for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
Related to:
- https://discuss.pytorch.org/t/torch-distributed-checkpoint-cuda-oom-with-broadcast-from-rank0/209240
- https://github.com/pytorch/pytorch/issues/148756#issuecomment-2722593066
Environment:
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.13 (main, Oct 3 2023, 01:22:22) [Clang 17.0.1 ] (64-bit runtime)
Python platform: Linux-5.15.0-1048-oracle-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy==1.7.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1+cu124
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
```
Same issue observed with torch 2.6.0
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @ptrblck @msaroufim @eqy @jerryzh168 @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o @LucasLLC @pradeepfn
| true
|
2,936,267,519
|
Pylint error: ` torch.linalg.vector_norm is not callable`
|
adosar
|
open
|
[
"module: typing",
"module: lint",
"triaged",
"actionable"
] | 3
|
NONE
|
### 🐛 Describe the bug
```python
# test.py
import torch
if __name__ == "__main__":
t = torch.linalg.vector_norm(torch.randn(32, 4))
```
Pylint throws the following error:
```
************* Module test
test.py:1:0: C0114: Missing module docstring (missing-module-docstring)
test.py:4:8: E1102: torch.linalg.vector_norm is not callable (not-callable)
-----------------------------------
Your code has been rated at 0.00/10
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Fedora Linux 39 (Workstation Edition) (x86_64)
GCC version: (GCC) 13.3.1 20240913 (Red Hat 13.3.1-3)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.38
Python version: 3.12.7 (main, Oct 1 2024, 00:00:00) [GCC 13.3.1 20240913 (Red Hat 13.3.1-3)] (64-bit runtime)
Python platform: Linux-6.11.9-100.fc39.x86_64-x86_64-with-glibc2.38
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
CPU family: 6
Model: 142
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU(s) scaling MHz: 32%
CPU max MHz: 3400.0000
CPU min MHz: 400.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 6 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.2.0.post0
[pip3] torch==2.5.1
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.20.1
[pip3] torchviz==0.0.2
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @ezyang @malfet @xuzhao9 @gramster
| true
|
2,936,265,115
|
[Release/2.6] Pin requirements
|
ethanwee1
|
closed
|
[
"oncall: distributed",
"module: rocm",
"module: cpu",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
Validation:
http://rocm-ci.amd.com/job/pytorch2.6-manylinux-wheels_rel-6.4-preview/15/
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,936,169,456
|
Fix is_nonzero for more than one elem tensors
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149637
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
Differential Revision: [D71560442](https://our.internmc.facebook.com/intern/diff/D71560442)
| true
|
2,936,155,389
|
Remove custom kwargs before calling BuildExtension.__init__(...)
|
janeyx99
|
open
|
[] | 6
|
CONTRIBUTOR
|
Remove custom kwargs before calling `BuildExtension.__init__(...)`
This should fix what is going on in https://fb.workplace.com/chat/t/100068823519463#:~:text=https%3A//github.com/pytorch/rl/actions/runs/13974012630/job/39123001095
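A minimal sketch of the idea, assuming the custom kwargs in question are the ones `BuildExtension` documents (`no_python_abi_suffix`, `use_ninja`); this is illustrative only, not the actual diff:
```python
from setuptools.command.build_ext import build_ext

class BuildExtensionSketch(build_ext):
    """Consume custom kwargs with pop() so they are gone before setuptools'
    Command.__init__ sees them (newer setuptools raises TypeError on unknown
    keyword arguments)."""

    def __init__(self, *args, **kwargs):
        self.no_python_abi_suffix = kwargs.pop("no_python_abi_suffix", False)
        self.use_ninja = kwargs.pop("use_ninja", True)
        super().__init__(*args, **kwargs)
```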
cc @vmoens
| true
|
2,936,147,432
|
avoid guarding on max() unnecessarily
|
bdhirsh
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"vllm-compile"
] | 7
|
CONTRIBUTOR
|
Here's a repro. Theoretically the code below should not require a recompile: we are conditionally padding, producing an output tensor of shape max(input_size, 16). Instead, though, we specialize on the pad value and produce separate graphs for the `size_16` and `size_greater_than_16` cases.
```
import torch
@torch.compile(backend="eager")
def f(x):
padded_size = max(x.shape[0], 16)
padded_tensor = torch.ones(padded_size, *x.shape[1:])
return padded_tensor + x.sum()
x = torch.arange(15)
torch._dynamo.mark_dynamic(x, 0)
out = f(x)
x = torch.arange(17)
torch._dynamo.mark_dynamic(x, 0)
out = f(x)
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @zou3519
| true
|
2,936,130,645
|
[ONNX] Improve onnx ops docs
|
justinchuby
|
closed
|
[
"module: onnx",
"triaged"
] | 0
|
COLLABORATOR
|
https://pytorch.org/docs/main/onnx_ops.html
Improve the example to show the ONNX op being used with torch ops.
| true
|
2,936,122,813
|
DRAFT: HasData
|
rec
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149633
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|