| id (int64, 2.74B-3.05B) | title (string, length 1-255) | user (string, length 2-26) | state (string, 2 classes) | labels (list, length 0-24) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, length 7-62.5k, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,743,175,297
|
[dynamo, guards] Move SHAPE_ENV guard to C++
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
MEMBER
|
Followup to https://github.com/pytorch/pytorch/pull/140063.
> Rewrite the SHAPE_ENV guard into C++ - it is a fairly common guard that results in FrameLocalsMapping needing to convert to a dict
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,743,172,941
|
[dynamo, guards] Implement FrameLocalsMapping version of check_verbose_nopybind
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
MEMBER
|
Follow up to https://github.com/pytorch/pytorch/pull/140063.
> Add FrameLocalsMapping version for check_verbose_nopybind in order to match behavior between check_nopybind and check_verbose_nopybind. This can prevent difficult debugging situations where guards fail (check_nopybind returns false) but no guard error message is generated (check_verbose_nopybind succeeds).
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,743,131,846
|
easy: sort dictionary keys for inductor config when publishing
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143317
* __->__ #143307
This means we should get consistent logging strings for the same config on different ranks.
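A minimal illustration of the idea (not the PR's actual code; the config dict and serialization call here are hypothetical):
```python
import json

# If two ranks iterate the same config dict in different orders, the logged
# strings differ even though the configs are identical. Sorting the keys
# before serializing yields byte-identical strings on every rank.
config = {"max_autotune": True, "cpp_wrapper": False, "freezing": False}
logged = json.dumps(config, sort_keys=True)
print(logged)
```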
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,743,128,114
|
Add CPU scalar support in addcdiv
|
EmmettBicker
|
open
|
[
"triaged",
"enhancement",
"actionable",
"module: python frontend"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Continuation of #143264.
Allow the user to pass a CPU scalar to `addcdiv`. I can do this as soon as the mentioned PR is merged!
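A sketch of the kind of call this would enable (assumes a CUDA device; which argument accepts the CPU scalar is an assumption here):
```python
import torch

# Hypothetical illustration: a 0-dim CPU scalar tensor mixed with CUDA tensors.
# Today this raises a device-mismatch error; the request is for addcdiv to
# accept the CPU scalar, as some other ops already do.
a = torch.randn(4, device="cuda")
t1 = torch.randn(4, device="cuda")
t2 = torch.tensor(2.0)  # CPU scalar tensor
out = torch.addcdiv(a, t1, t2)
```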
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD
| true
|
2,743,093,359
|
[C10D] Update docs for wait()
|
wconstab
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143305
Clarify that the currently active stream, not the default stream, is the one that will be blocked by a call to wait(), and also point out that the CPU is not blocked by the call for CUDA/NCCL collectives.
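A small sketch of the documented behavior (assumes torch.distributed was initialized with the NCCL backend, e.g. via torchrun, and that each rank has a CUDA device):
```python
import torch
import torch.distributed as dist

t = torch.ones(8, device="cuda")
work = dist.all_reduce(t, async_op=True)

# wait() makes the *currently active* CUDA stream wait for the collective,
# not the default stream, and does not block the CPU for CUDA/NCCL collectives.
work.wait()
out = t * 2  # launched on the current stream, ordered after the all_reduce
```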
| true
|
2,743,085,099
|
[compiled autograd] Proxy a node for CopyBackwards into the graph
|
zou3519
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144115
* #143417
* #143405
* #143387
* __->__ #143304
* #143296
CopyBackwards is a manual C++ torch::autograd::Node; we update its
apply_with_saved to proxy a functional version of it into the graph instead
of inlining into it.
Test Plan:
- existing tests
| true
|
2,743,005,800
|
update non strict cond tests
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143303
Differential Revision: [D67285992](https://our.internmc.facebook.com/intern/diff/D67285992/)
| true
|
2,742,990,248
|
Triton bump for 3.2 cherry-picks (mmav3 segfault fix, gfx950 support)
|
bertmaher
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"rocm",
"ciflow/rocm"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143302
* https://github.com/triton-lang/triton/pull/5277
* https://github.com/triton-lang/triton/pull/5084
| true
|
2,742,985,940
|
Fix a misspelling [ONNX]
|
xadupre
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
| null | true
|
2,742,956,575
|
[BE] Revert "Add conda to Manylinux Docker images (#139903)"
|
atalman
|
closed
|
[
"Merged",
"Reverted",
"Stale",
"topic: not user facing",
"ci-no-td"
] | 7
|
CONTRIBUTOR
|
This reverts commit 56a40d4ebb0bcf733f1ea5f6efde805326a7a565.
Having conda in manylinux builder images is not required. It was added so that manylinux-builder images could be the only images used for CD builds after conda-builder was deprecated. However, we decided to start using ``almalinux-builder`` instead.
We are using almalinux-builder for linux_job_v2 which contains conda: https://github.com/pytorch/test-infra/blob/main/.github/workflows/linux_job_v2.yml#L114
| true
|
2,742,950,386
|
[FlexAttention] Allow num_warps 8 when block size >= 128
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"module: flex attention"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143103
* #143344
* __->__ #143299
# Summary
Fixes #143290
We already strip bad configs here: https://github.com/pytorch/pytorch/blob/e0e763e33135d2ad25c613007aa5f2fee6d2cc24/torch/_inductor/kernel/flex_attention.py#L2299
So this shouldn't be needed. Confirming that the 64 x 128 case is valid; otherwise we can just change the default config.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng
| true
|
2,742,949,945
|
non strict sequential slicing
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143298
Differential Revision: [D67284841](https://our.internmc.facebook.com/intern/diff/D67284841/)
| true
|
2,742,933,107
|
[FSDP2] Clamp `reduce_dtype` in lazy init
|
awgu
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (fsdp2)"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143297
Fixes https://github.com/pytorch/pytorch/issues/143277 by moving the clamp of `reduce_dtype` to `None` into lazy init (the same place where `param_dtype` can be clamped to `None`).
cc @H-Huang @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,742,816,793
|
[compiled autograd] Proxy opaque nodes for built-in autograd nodes
|
zou3519
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"module: compiled autograd",
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144115
* #143417
* #143405
* #143387
* #143304
* __->__ #143296
This PR is on the way to getting compiled autograd's initial capture to
stop specializing on Tensor metadata.
This PR changes compiled autograd's initial capture to proxy an opaque
(w.r.t. Dynamo) function into the graph for all built-in codegen'ed
autograd nodes and validate_outputs.
We changed each codegen'ed apply_with_saved (e.g.
MulBackward0::apply_with_saved) to call into Python to proxy a function
(compiled_autograd.ops.MulBackward0) into the graph. Then, we use the
node's InputMetadata to "guess" at the properties of the output Tensors
to create some new FakeTensors.
Some details:
- MulBackward0::apply_with_saved lives in libtorch_cpu, but needs to
call into Python via libtorch_python. There is an indirection
(PyCompilerInterface) to do this.
- MulBackward0::apply_with_saved passes a C++ function to Python. To make
our lives easier, every codegen'ed apply_with_saved passes a C++
function with the same signature
`(variable_list, ivalue_list) -> variable_list`.
- We define how to pack arbitrary C++ types into IValue via a helper
IValuePacker struct and codegen functional variants of each builtin
C++ autograd node (e.g. MulBackward0_apply_functional_ivalue).
MulBackward0 before this PR:
https://gist.github.com/zou3519/a80381d5fa38e970e413fcd91b0530de
MulBackward0 after this PR:
https://gist.github.com/zou3519/0c2eee8b3d8d96232b51ef430b53c5b0
Test Plan:
- existing tests
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @xmfan
| true
|
2,742,801,156
|
`torch.Tensor.angle()` produces inconsistent results on CPU, only on Linux
|
Uncomfy
|
closed
|
[] | 3
|
NONE
|
### 🐛 Describe the bug
Hello!
`torch.Tensor.angle()` produces inconsistent results depending on the order of operations. Specifically:
1. Computing the angle for the entire tensor and then indexing into the result gives different values compared to first indexing the tensor and then computing the angle.
2. Similarly, concatenating two tensors before computing the angle produces different results compared to computing the angle for each tensor individually and then concatenating the results.
This only happens on CPU on Linux; it works completely fine on Windows or when the tensor is on the GPU.
```python3
import torch
print("Single element comparison")
complex_tensor = torch.complex(
torch.arange(10, dtype=torch.float32),
torch.arange(10, dtype=torch.float32),
)
for i in range(complex_tensor.size(0)):
if complex_tensor.angle()[i] != complex_tensor[i].angle():
print("Mismatch at pos", i)
# Output:
# Mismatch at pos 1
# Mismatch at pos 2
# Mismatch at pos 3
# Mismatch at pos 4
# Mismatch at pos 5
# Mismatch at pos 6
# Mismatch at pos 7
print()
########################################################
print("Concatenate before and after `angle()`")
complex_tensor_1 = torch.complex(
torch.arange(0, 5, dtype=torch.float32),
torch.arange(0, 5, dtype=torch.float32),
)
complex_tensor_2 = torch.complex(
torch.arange(5, 10, dtype=torch.float32),
torch.arange(5, 10, dtype=torch.float32),
)
concat_before_angle = torch.cat([complex_tensor_1, complex_tensor_2], dim=0).angle()
concat_after_angle = torch.cat([complex_tensor_1.angle(), complex_tensor_2.angle()], dim=0)
print("Results are identical:", (concat_before_angle == concat_after_angle).all().item())
# Output:
# Results are identical: False
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 13
BogoMIPS: 5184.01
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves md_clear flush_l1d arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1.5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Mitigation; TSX disabled
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
| true
|
2,742,673,333
|
update aten bmm CK heuristic
|
bradleyhd
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary: updates the heuristic to use new instances based on CK profiling of LLM shapes.
Differential Revision: D67280269
| true
|
2,742,673,023
|
`bias=False` fails in `Transformer` when `batch_first=True` and in eval mode
|
aamster
|
open
|
[
"module: nn",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
```
import torch
from torch import nn
transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12, bias=False, batch_first=True)
src = torch.rand((10, 32, 512))
tgt = torch.rand((10, 32, 512))
transformer_model.eval()
out = transformer_model(src, tgt)
```
```
Traceback (most recent call last):
File "/Users/adam.amster/Library/Application Support/JetBrains/PyCharm2024.3/scratches/scratch_35.py", line 9, in <module>
out = transformer_model(src, tgt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 206, in forward
memory = self.encoder(src, mask=src_mask, src_key_padding_mask=src_key_padding_mask,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 391, in forward
output = mod(output, src_mask=mask, is_causal=is_causal, src_key_padding_mask=src_key_padding_mask_for_layers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 676, in forward
elif not all((x.device.type in _supported_device_type) for x in tensor_args):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 676, in <genexpr>
elif not all((x.device.type in _supported_device_type) for x in tensor_args):
^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'
```
Note that it works fine in train mode.
### Versions
```
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (x86_64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.24.0
Libc version: N/A
Python version: 3.11.9 (main, Apr 2 2024, 08:25:04) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime)
Python platform: macOS-14.5-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.2.2
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.4.0.post0
[pip3] torchtext==0.17.2
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,742,645,100
|
[CD] Fix XPU linux CD whl test failure
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Follows https://github.com/pytorch/pytorch/pull/142482; refer to the original fix PR https://github.com/pytorch/pytorch/pull/130742 and the new issue in https://github.com/pytorch/pytorch/actions/runs/12323126436/job/34403681230
Works for https://github.com/pytorch/pytorch/issues/114850
| true
|
2,742,612,823
|
[2/N][Memory Profiling] Record memory allocation/free
|
mzzchy
|
closed
|
[
"fb-exported",
"Stale",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143291
Design Doc: https://fburl.com/gdoc/47zpuweb
Prototyping: D66469341
In this diff, we implement the logic to record, store, and export the memory trace, which will be invoked by MTIA hooks later.
* Add RingBuffer<MTIATraceEntry> to mtia_allocator to store the trace
* Implement record_trace() to add a trace entry for each allocation and free
* Add record_histroy_ as the profiler enablement flag and record_histroy() to toggle the state
To avoid the duplicate symbol error, we remove python/combined_traceback from the srcs of C_impl_cuda and add libtorch_memory_profiler as a dependency.
Differential Revision: [D66776251](https://our.internmc.facebook.com/intern/diff/D66776251/)
| true
|
2,742,566,284
|
FlexAttention: BFloat16 training is not working on nightly
|
ViktorooReps
|
closed
|
[
"high priority",
"triage review",
"module: bfloat16",
"oncall: pt2",
"module: flex attention"
] | 6
|
NONE
|
### 🐛 Describe the bug
Minimal code to reproduce:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
flex_attention = torch.compile(flex_attention)
x = torch.randn(
(1, 8, 256, 128),
device='cuda',
dtype=torch.float,
requires_grad=True
)
flex_attention(x, x, x).sum().backward()
```
Change `dtype` to `torch.bfloat16` to encounter the following error:
```
Traceback (most recent call last):
File "/mloscratch/homes/shcherba/dl-char-llm/test.py", line 15, in <module>
flex_attention(x, x, x).sum().backward()
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1958, in backward
return impl_fn()
^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1944, in impl_fn
out = CompiledFunction._backward_impl(ctx, all_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2051, in _backward_impl
CompiledFunction.compiled_bw = aot_config.bw_compiler(
^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 54, in _wrapped_bw_compiler
return disable(disable(bw_compiler_fn)(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 744, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1825, in bw_compiler
return inner_compile(
^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 572, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 676, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile
graph.run(*example_inputs)
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/graph.py", line 859, in run
return super().run(*args)
^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1500, in run_node
result = super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1147, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/graph.py", line 1137, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/lowering.py", line 451, in wrapped
out = decomp_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/kernel/flex_attention.py", line 2398, in flex_attention_backward
broadcasted_grad_key = autotune_select_algorithm(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/select_algorithm.py", line 2210, in autotune_select_algorithm
return _ALGORITHM_SELECTOR_CACHE(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mloscratch/homes/shcherba/conda/envs/char-llm/lib/python3.11/site-packages/torch/_inductor/select_algorithm.py", line 1658, in __call__
raise NoValidChoicesError(
torch._inductor.exc.LoweringException: NoValidChoicesError: No choices to select, please consider adding ATEN into max_autotune_gemm_backends config (defined in torch/_inductor/config.py) to allow at least one choice.
target: flex_attention_backward
args[0]: TensorBox(StorageBox(
InputBuffer(name='primals_1', layout=FixedLayout('cuda:0', torch.bfloat16, size=[1, 8, 256, 128], stride=[262144, 32768, 128, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='primals_1', layout=FixedLayout('cuda:0', torch.bfloat16, size=[1, 8, 256, 128], stride=[262144, 32768, 128, 1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='primals_1', layout=FixedLayout('cuda:0', torch.bfloat16, size=[1, 8, 256, 128], stride=[262144, 32768, 128, 1]))
))
args[3]: TensorBox(StorageBox(
InputBuffer(name='getitem_2', layout=FixedLayout('cuda:0', torch.bfloat16, size=[1, 8, 256, 128], stride=[262144, 32768, 128, 1]))
))
args[4]: TensorBox(StorageBox(
DonatedBuffer(name='getitem_3', layout=FixedLayout('cuda:0', torch.float32, size=[1, 8, 256], stride=[2048, 256, 1]))
))
args[5]: TensorBox(StorageBox(
InputBuffer(name='tangents_1', layout=FixedLayout('cuda:0', torch.bfloat16, size=[1, 8, 256, 128], stride=[262144, 32768, 128, 1]))
))
args[6]: TensorBox(StorageBox(
Pointwise(
'cuda',
torch.float32,
def inner_fn(index):
_, i1, i2 = index
tmp0 = ops.constant(0, torch.float32)
return tmp0
,
ranges=[1, 8, 256],
origin_node=full_default_4,
origins=OrderedSet([full_default_4])
)
))
args[7]: Subgraph(name='fw_graph0', graph_module=<lambda>(), graph=None)
args[8]: Subgraph(name='joint_graph0', graph_module=<lambda>(), graph=None)
args[9]: (1, 1, TensorBox(StorageBox(
DonatedBuffer(name='full', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
DonatedBuffer(name='full_default', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]))
)), None, None, TensorBox(StorageBox(
DonatedBuffer(name='convert_element_type', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]))
)), TensorBox(StorageBox(
DonatedBuffer(name='convert_element_type_1', layout=FixedLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]))
)), None, None, 1073741824, 1073741824, Subgraph(name='mask_graph0', graph_module=<lambda>(), graph=None))
args[10]: 0.08838834764831843
args[11]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'WRITE_DQ': True, 'OUTPUT_LOGSUMEXP': True}
args[12]: ()
args[13]: ()
```
### Versions
PyTorch version: 2.6.0.dev20241216+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (conda-forge gcc 12.1.0-17) 12.1.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: glibc-2.35
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.12
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9454 48-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3810.7910
CPU min MHz: 1500.0000
BogoMIPS: 5491.58
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241216+cu124
[pip3] torchaudio==2.6.0.dev20241216+cu124
[pip3] torchvision==0.22.0.dev20241216+cu124
[pip3] triton==3.1.0
[conda] numpy 2.1.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241216+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241216+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241216+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @Chillee @drisspg @yanboliang @BoyuanFeng @ydwu4 @bdhirsh @yf225
| true
|
2,742,536,710
|
RFC: Dynamically Quantized 4 bit matmul API and usage
|
nikhil-arm
|
open
|
[
"oncall: quantization"
] | 6
|
COLLABORATOR
|
# 4-Bit Dynamically Quantized Matrix Multiplication in PyTorch
This RFC introduces two new operations to enable efficient 4-bit weight quantization and matrix multiplication in PyTorch. These operations provide a mechanism for low-precision arithmetic to be used for both training and inference, improving performance and reducing memory usage. The two new operations are:
- `torch.ops.aten._dyn_quant_pack_4bit_weight` - Packs the quantized weights, scales, and bias for a Linear layer into a compact format using 4-bit symmetric quantization.
- `torch.ops.aten._dyn_quant_matmul_4bit` - Performs matrix multiplication using quantized weights, optimized for 4-bit precision.
## 1. `torch.ops.aten._dyn_quant_pack_4bit_weight`
This operation is used to pack the quantized weights, scales, and optional bias for a Linear layer. The function expects 4-bit quantized weights and returns a packed representation.
### **Parameters:**
- **`weight`** (`Tensor`): The original weights of the Linear layer.
- **`scales_and_zeros`** (`Tensor`): A tensor containing the quantization scales for each group. The tensor has the shape `[num_groups]`.
- **`bias`** (`Tensor`, optional): The bias tensor for the Linear layer. This parameter is optional.
- **`groupsize`** (`int`): The number of channels per group. The value must be a multiple of 32 or equal to `in_features`.
- **`in_features`** (`int`): The number of input features (the size of the input tensor).
- **`out_features`** (`int`): The number of output features (the size of the output tensor).
### **Returns:**
A tensor representing the packed 4-bit weights and scales, which can be passed to the matrix multiplication operation.
---
## 2. `torch.ops.aten._dyn_quant_matmul_4bit`
This operation performs matrix multiplication using the quantized weights in 4-bit precision, optimized for efficient execution.
### **Parameters:**
- **`input`** (`Tensor`): The input tensor for the matrix multiplication, typically with shape `[batch_size, in_features]`.
- **`packed_weights`** (`Tensor`): The packed 4-bit weights, returned by `torch.ops.aten._dyn_quant_pack_4bit_weight`.
- **`groupsize`** (`int`): The number of channels per group. The value must be a multiple of 32 or equal to `in_features`.
- **`in_features`** (`int`): The number of input features (same as the Linear layer's `in_features`).
- **`out_features`** (`int`): The number of output features (same as the Linear layer's `out_features`).
### **Returns:**
A tensor representing the result of the matrix multiplication, with shape `[batch_size, out_features]`.
---
## API Usage Example
The comment below contains an example of how to use these operations for quantization, execution, and benchmarking:
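As a rough sketch of the intended call sequence (based only on the parameter descriptions above; the 4-bit packing format of the weights, the scale layout, and the shapes are assumptions, and the real example lives in the referenced comment):
```python
import torch

# Hypothetical shapes for illustration only.
batch_size, in_features, out_features, groupsize = 4, 64, 32, 32

# Assumed inputs: 4-bit symmetric-quantized weights (two nibbles per byte,
# packing format assumed), per-group scales (layout assumed), optional bias.
weight_q4 = torch.randint(0, 16, (out_features, in_features // 2), dtype=torch.uint8)
scales = torch.rand(out_features * (in_features // groupsize), dtype=torch.float32)
bias = torch.randn(out_features)

packed = torch.ops.aten._dyn_quant_pack_4bit_weight(
    weight_q4, scales, bias, groupsize, in_features, out_features
)

x = torch.randn(batch_size, in_features)
out = torch.ops.aten._dyn_quant_matmul_4bit(
    x, packed, groupsize, in_features, out_features
)
print(out.shape)  # expected: [batch_size, out_features]
```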
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,742,426,577
|
EXCEPTION : /python3.11/distutils/core.py
|
tanwarsh
|
open
|
[
"needs reproduction",
"triaged",
"module: third_party",
"oncall: pt2"
] | 6
|
NONE
|
### 🐛 Describe the bug
Facing the same issue with Python 3.10 and 3.11 as well, with the latest torch versions:
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
```
from torchvision import datasets
```
```
File "/my_workspace/src/dataloader.py", line 7, in <module>
from torchvision import datasets
File "/lib/python3.11/site-packages/torchvision/__init__.py", line 10, in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/torchvision/models/__init__.py", line 2, in <module>
from .convnext import *
File "/lib/python3.11/site-packages/torchvision/models/convnext.py", line 8, in <module>
from ..ops.misc import Conv2dNormActivation, Permute
File "/lib/python3.11/site-packages/torchvision/ops/__init__.py", line 23, in <module>
from .poolers import MultiScaleRoIAlign
File "/lib/python3.11/site-packages/torchvision/ops/poolers.py", line 10, in <module>
from .roi_align import roi_align
File "/lib/python3.11/site-packages/torchvision/ops/roi_align.py", line 7, in <module>
from torch._dynamo.utils import is_compile_supported
File "/lib/python3.11/site-packages/torch/_dynamo/__init__.py", line 3, in <module>
from . import convert_frame, eval_frame, resume_execution
File "/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 31, in <module>
from torch._dynamo.utils import CompileTimeInstructionCounter
File "/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1320, in <module>
if has_triton_package():
^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/torch/utils/_triton.py", line 9, in has_triton_package
from triton.compiler.compiler import triton_key
File "/lib/python3.11/site-packages/triton/__init__.py", line 8, in <module>
from .runtime import (
File "/lib/python3.11/site-packages/triton/runtime/__init__.py", line 1, in <module>
from .autotuner import (Autotuner, Config, Heuristics, autotune, heuristics)
File "/lib/python3.11/site-packages/triton/runtime/autotuner.py", line 9, in <module>
from ..testing import do_bench, do_bench_cudagraph
File "/lib/python3.11/site-packages/triton/testing.py", line 7, in <module>
from . import language as tl
File "/lib/python3.11/site-packages/triton/language/__init__.py", line 4, in <module>
from . import math
File "/lib/python3.11/site-packages/triton/language/math.py", line 1, in <module>
from . import core
File "/lib/python3.11/site-packages/triton/language/core.py", line 10, in <module>
from ..runtime.jit import jit
File "/lib/python3.11/site-packages/triton/runtime/jit.py", line 12, in <module>
from ..runtime.driver import driver
File "/lib/python3.11/site-packages/triton/runtime/driver.py", line 1, in <module>
from ..backends import backends
File "/lib/python3.11/site-packages/triton/backends/__init__.py", line 50, in <module>
backends = _discover_backends()
^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/triton/backends/__init__.py", line 44, in _discover_backends
driver = _load_module(name, os.path.join(root, name, 'driver.py'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/triton/backends/__init__.py", line 12, in _load_module
spec.loader.exec_module(module)
File "/lib/python3.11/site-packages/triton/backends/amd/driver.py", line 7, in <module>
from triton.runtime.build import _build
File "/lib/python3.11/site-packages/triton/runtime/build.py", line 8, in <module>
import setuptools
File "/lib/python3.11/site-packages/setuptools/__init__.py", line 8, in <module>
import _distutils_hack.override # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/_distutils_hack/override.py", line 1, in <module>
__import__('_distutils_hack').do_override()
File "/lib/python3.11/site-packages/_distutils_hack/__init__.py", line 77, in do_override
ensure_local_distutils()
File "/lib/python3.11/site-packages/_distutils_hack/__init__.py", line 64, in ensure_local_distutils
assert '_distutils' in core.__file__, core.__file__
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: /python3.11/distutils/core.py
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-192-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 45 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2992.968
BogoMIPS: 5985.93
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 8 MiB
L3 cache: 286 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Vulnerable, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu
| true
|
2,742,407,567
|
Masked self-attention not working as expected when each token also masks itself
|
jacksalici
|
closed
|
[
"module: autograd",
"module: nn",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
I was developing a self-attention module using `nn.MultiheadAttention` (MHA). My goal was to implement a causal mask that forces each token to attend only to the tokens before it, excluding itself, unlike standard autoregressive causal masks where tokens can also attend to themselves.
Here's the function to generate my custom causal mask:
```py
def generate_causal_mask(seq_length):
# Diagonal = 0, so each element attends only to elements before it, excluding itself
mask = torch.triu(torch.full((seq_length, seq_length), 1, dtype=torch.float32), diagonal=0).bool()
# Allow the first element to attend to itself to avoid NaN results
mask[0, 0] = False
return mask
```
The resulting mask looks like this:
```
tensor([[False, True, True, True, True, True, True, True],
[False, True, True, True, True, True, True, True],
[False, False, True, True, True, True, True, True],
[False, False, False, True, True, True, True, True],
[False, False, False, False, True, True, True, True],
[False, False, False, False, False, True, True, True],
[False, False, False, False, False, False, True, True],
[False, False, False, False, False, False, False, True]])
```
Here, `True` means "cannot attend." The first element attends to itself (`False` at position [0, 0]) to avoid NaN results.
The code to reproduce the issue:
```py
if __name__ == "__main__":
embed_dim = 16
batch_size = 1
seq_len = 8
mha = nn.MultiheadAttention(embed_dim, num_heads=1, batch_first=True)
x = torch.randn(batch_size, seq_len, embed_dim).requires_grad_(True)
causal_mask = generate_causal_mask(seq_len)
print(causal_mask)
output, _ = mha(x, x, x, attn_mask=causal_mask)
# Gradient of the output with respect to the token at position t
t = 5
loss = output[:, t].sum().backward()
print("Gradient of the token:")
print(x.grad)
```
### Observed Behavior
When printing the gradient of the input (x.grad) for token `t = 5`, I noticed that the output at time step `t = 5` depends on its own value. This is unexpected because, according to the causal mask, tokens should only attend to elements before themselves.
> tensor([[[ 1.7815e-02, 6.0239e-02, 4.4045e-02, -1.7005e-02, -1.2529e-01,
> -9.8527e-02, -2.5346e-02, 4.4857e-02, -9.7425e-02, 1.0793e-01,
> 1.4662e-01, 1.0073e-01, -9.0143e-02, -2.5913e-02, 1.3379e-03,
> -9.0163e-02],
> [ 2.6240e-01, 1.4095e-01, 2.9541e-01, 6.0876e-02, -1.5522e-01,
> -1.5531e-01, 4.4279e-02, 6.3482e-02, -2.1853e-01, 2.4059e-02,
> 2.2273e-01, 1.1566e-01, 6.6013e-02, -1.2247e-01, -1.1333e-01,
> -1.5512e-01],
> [ 5.3024e-02, 4.4725e-02, 6.7385e-02, 5.5258e-03, -6.8150e-02,
> -5.9587e-02, -1.4061e-04, 2.5825e-02, -7.0633e-02, 3.8935e-02,
> 8.7158e-02, 5.3142e-02, -1.6992e-02, -3.0389e-02, -2.0005e-02,
> -5.6871e-02],
> [ 2.9774e-01, 1.1942e-01, 3.1602e-01, 8.5978e-02, -8.4358e-02,
> -1.0587e-01, 7.2915e-02, 3.9608e-02, -1.8192e-01, -5.7704e-02,
> 1.4758e-01, 5.6968e-02, 1.5057e-01, -1.2490e-01, -1.3581e-01,
> -1.1233e-01],
> [ 1.1037e-01, 7.4862e-02, 1.3163e-01, 1.9109e-02, -1.0056e-01,
> -9.2370e-02, 9.9104e-03, 3.9165e-02, -1.1730e-01, 4.2791e-02,
> 1.3410e-01, 7.7194e-02, -1.3165e-03, -5.6924e-02, -4.4891e-02,
> -8.9721e-02],
> **[-6.6541e-02, -1.0303e-02, -3.5482e-02, 2.1983e-02, -5.1578e-02,
> 2.0161e-01, 7.2047e-02, -4.0216e-02, -1.7608e-02, -1.2176e-02,
> -5.2893e-02, -1.1424e-01, 4.6907e-03, -1.0784e-01, 5.8249e-02,
> 9.0503e-03],**
> [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
> 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
> 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
> 0.0000e+00],
> [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
> 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
> 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
> 0.0000e+00]]])
### Expected Behavior
Given the custom causal mask, the output at token `t` should depend only on tokens at earlier time steps (0 to `t-1`). It should not depend on itself, as the diagonal is masked.
### Request for Clarification
Is this behavior a bug in the MultiheadAttention implementation, or am I misunderstanding how `attn_mask` works? If this is intended behavior, could you please clarify how to correctly achieve the desired masking effect?
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.4 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.3
Libc version: N/A
Python version: 3.12.0 | packaged by conda-forge | (main, Oct 3 2023, 08:36:57) [Clang 15.0.7 ] (64-bit runtime)
Python platform: macOS-14.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[conda] numpy 2.1.3 py312h94ee1e1_0 conda-forge
[conda] pytorch 2.5.1 py3.12_0 pytorch
[conda] torchaudio 2.5.1 py312_cpu pytorch
[conda] torchvision 0.20.1 py312_cpu pytorch
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,742,363,305
|
[ROCm] ROCm-specific gemm tuning parameters
|
jataylo
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"release notes: rocm",
"module: inductor",
"ciflow/inductor",
"rocm",
"ciflow/rocm"
] | 11
|
COLLABORATOR
|
Adds tuning options for extra_args in mm_common.py; on the ROCm side we can supply specific Triton tuning args such as waves_per_eu, kpack, and matrix_instr_nonkdim. This PR also introduces behavior to allow tuning GROUP_M in the Triton GEMM case, and brings in specific tuning for the general ROCm GEMM case.
Dynamo huggingface inference benchmarks (with TORCHINDUCTOR_MAX_AUTOTUNE=1, TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS="TRITON", --bfloat16)
> GEOMEAN speedup (before): 1.36x
> GEOMEAN speedup (after): 1.42x
We are also seeing an improvement of ~9% on an internal addmm benchmark.
This will require a follow-up to see if we can prune the new config list further.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,742,103,911
|
Add _foreach_clone ops
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"Stale",
"release notes: foreach_frontend"
] | 6
|
CONTRIBUTOR
|
Fixes #142181
Add `_foreach_clone` ops
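A minimal sketch of the intended usage (assuming the new op mirrors the existing `_foreach_*` APIs):
```python
import torch

tensors = [torch.randn(3), torch.randn(2, 2)]
# Expected to behave like [t.clone() for t in tensors], but dispatched as a
# single foreach op over the whole list.
clones = torch._foreach_clone(tensors)
```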
**Test Result**
```bash
$ pytest test/test_foreach.py -k test_foreach_clone_tensors -v
```

cc @janeyx99
| true
|
2,742,070,472
|
`set_linter` suggests destructive changes on a new commit
|
rec
|
closed
|
[
"module: lint",
"triaged",
"bug"
] | 4
|
COLLABORATOR
|
### 🐛 Describe the bug
Reported by @Esquains and discussed [here](https://github.com/pytorch/pytorch/pull/138454#issuecomment-2543369337).
This string
```
print(f"{tempfile.gettempdir()}/memory_snapshot.pickle")
```
gets mistakenly translated into
```
print(f"OrderedSet([tempfile.gettempdir()}/memory_snapshot.pickle])")
```
Thousands of other existing f-strings work fine - our guess is that it's something to do with `lintrunner` passing only incremental updates to linters.
This will likely hit any commit changing `torch/_inductor`, so it should be fixed immediately.
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (conda-forge gcc 12.3.0-7) 12.3.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.9.20 | packaged by conda-forge | (main, Sep 30 2024, 17:49:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2060
GPU 1: NVIDIA GeForce RTX 2060
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.7.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.3.2/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3970X 32-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3700.0000
CPU min MHz: 2200.0000
BogoMIPS: 7400.24
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[conda] cuda-cudart 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart-static 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-static_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cupti 12.4.127 he02047a_2 conda-forge
[conda] cuda-cupti-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-libraries-dev 12.4.1 ha770c72_1 conda-forge
[conda] cuda-nvrtc 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvrtc-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx-dev 12.4.127 ha770c72_2 conda-forge
[conda] cuda-opencl 12.4.127 he02047a_1 conda-forge
[conda] cuda-opencl-dev 12.4.127 he02047a_1 conda-forge
[conda] cudnn 9.3.0.75 h93bb076_0 conda-forge
[conda] libcublas 12.4.5.8 he02047a_2 conda-forge
[conda] libcublas-dev 12.4.5.8 he02047a_2 conda-forge
[conda] libcufft 11.2.1.3 he02047a_2 conda-forge
[conda] libcufft-dev 11.2.1.3 he02047a_2 conda-forge
[conda] libcurand 10.3.5.147 he02047a_2 conda-forge
[conda] libcurand-dev 10.3.5.147 he02047a_2 conda-forge
[conda] libcusolver 11.6.1.9 he02047a_2 conda-forge
[conda] libcusolver-dev 11.6.1.9 he02047a_2 conda-forge
[conda] libcusparse 12.3.1.170 he02047a_2 conda-forge
[conda] libcusparse-dev 12.3.1.170 he02047a_2 conda-forge
[conda] libmagma 2.8.0 h0af6554_0 conda-forge
[conda] libmagma_sparse 2.8.0 h0af6554_0 conda-forge
[conda] libnvjitlink 12.4.127 he02047a_2 conda-forge
[conda] libnvjitlink-dev 12.4.127 he02047a_2 conda-forge
[conda] magma 2.8.0 h51420fd_0 conda-forge
[conda] mkl 2024.2.2 ha957f24_15 conda-forge
[conda] mkl-include 2024.2.2 ha957f24_15 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 py39h74842e3_0 conda-forge
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
| true
|
2,741,992,301
|
[Triton commit bump] Upgrade nightly commit to include gfx950 target + LLVM bump
|
jataylo
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Brings in https://github.com/triton-lang/triton/pull/5417
| true
|
2,741,952,682
|
[foreach-map] Add tests for backward
|
mlazos
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Adds tests for unary and binary foreach_map w/ backwards
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,741,641,079
|
torch.linalg.qr is significantly slower on GPU compared to CPU and SVD for batched small matrices
|
h-skibbe
|
open
|
[
"module: cuda",
"triaged",
"module: linear algebra"
] | 5
|
NONE
|
### 🐛 Describe the bug
When performing QR decomposition on batched small matrices, torch.linalg.qr is significantly slower on the GPU than on the CPU, and even slower than torch.linalg.svd on the GPU. This behavior seems unexpected since QR decomposition is typically faster than SVD. Tested with PyTorch 2.3 and PyTorch 2.5.
Steps to Reproduce:
```
import torch
import time
matrix = torch.randn(10, 30, 30, 30, 3, 3)
for device in ["cpu", "cuda"]:
start = time.time()
U, S, V = torch.linalg.svd(matrix.to(device))
print("SVD Time:", time.time() - start, "Device:", device)
start = time.time()
Q, R = torch.linalg.qr(matrix.to(device))
print("QR Time:", time.time() - start, "Device:", device)
```
Output on an Nvidia H100:
SVD Time: 0.5345005989074707 cpu
QR Time: 0.11194396018981934 cpu
SVD Time: 0.06591439247131348 cuda
QR Time: 8.115148305892944 cuda
Output on an Nvidia Rtx5000a:
SVD Time: 0.6414940357208252 cpu
QR Time: 0.11259078979492188 cpu
SVD Time: 0.8223276138305664 cuda
QR Time: 7.6162269115448 cuda
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 10:40:35) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 PCIe
GPU 1: NVIDIA H100 PCIe
GPU 2: NVIDIA H100 PCIe
GPU 3: NVIDIA H100 PCIe
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4799.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.8.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.26.4 py311h64a7726_0 conda-forge
[conda] numpydoc 1.8.0 pyhd8ed1ab_0 conda-forge
[conda] pytorch 2.5.1 py3.11_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py311_cu124 pytorch
[conda] torchtriton 3.1.0 py311 pytorch
[conda] torchvision 0.20.1 py311_cu124 pytorch
cc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,741,624,383
|
[CPU][Inductor] Diffusers model got NotImplementedError: SliceView on CPU
|
mengfei25
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
SliceView NotImplementedError on CPU
```python
# https://github.com/intel/ai-reference-models/blob/05dea0c0554aa1051cd622d06c959eb1dea74213/models_v2/pytorch/LCM/inference/cpu/inference.py
export TORCH_INDUCTOR=1
export TORCHINDUCTOR_FREEZING=1
python ai-reference-models/models_v2/pytorch/LCM/inference/cpu/inference.py --model_name_or_path=SimianLuo/LCM_Dreamshaper_v7 --dataset_path=/path/to/coco --benchmark -w 1 -i 3 --compile_inductor
```
Logs
```bash
torch.compile with inductor backend ...
Traceback (most recent call last):
File "/root/ai-reference-models/models_v2/pytorch/LCM/inference/cpu/inference.py", line 570, in <module>
main()
File "/root/ai-reference-models/models_v2/pytorch/LCM/inference/cpu/inference.py", line 288, in main
pipe.unet(*input)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2870, in run
super().run()
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1053, in run
while self.step():
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 963, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3050, in RETURN_VALUE
self._return(inst)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3035, in _return
self.output.compile_subgraph(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1136, in compile_subgraph
self.compile_and_call_fx_graph(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1594, in compile_fx
return compile_fx(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1880, in compile_fx
return aot_autograd(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1145, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
compiled_fn = dispatch_and_compile()
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 201, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1456, in fw_compiler_freezing
optimized_function = inner_compile(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 572, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 676, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile
graph.run(*example_inputs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/graph.py", line 859, in run
return super().run(*args)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1500, in run_node
result = super().run_node(n)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1085, in call_function
return target(*args, **kwargs)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/fx_passes/mkldnn_fusion.py", line 569, in fn
if not _can_be_inplace(other) or other.data.shape != list(
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/fx_passes/mkldnn_fusion.py", line 525, in _can_be_inplace
return _can_be_inplace(_other.data)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/fx_passes/mkldnn_fusion.py", line 525, in _can_be_inplace
return _can_be_inplace(_other.data)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/fx_passes/mkldnn_fusion.py", line 525, in _can_be_inplace
return _can_be_inplace(_other.data)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/fx_passes/mkldnn_fusion.py", line 529, in _can_be_inplace
or len(_other.get_inputs_that_alias_output()) > 0
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/_inductor/ir.py", line 608, in get_inputs_that_alias_output
raise NotImplementedError(type(self).__name__)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
NotImplementedError: SliceView
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241215+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.16.0-rc1-intel-next-00543-g5867b0a2a125-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 6
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 amx_tile flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.6.0.dev20241215+cpu
[pip3] torch-fidelity==0.3.0
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.22.0a0+9defef5
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 2.6.0.dev20241215+cpu pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchvision 0.22.0a0+9defef5 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,741,607,321
|
[Inductor] Fix _can_be_inplace function
|
jiayisunx
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143279
Summary:
Modify _can_be_inplace function: return False if `_other.data` is an instance of `ir.BaseView`.
Fix https://github.com/pytorch/pytorch/issues/143280.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,741,583,374
|
Update slow tests
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3
|
COLLABORATOR
|
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
| true
|
2,741,287,656
|
[fsdp2] mixed precision reduce dtype is clamped before lazy init
|
leonardo0lyj
|
closed
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 4
|
NONE
|
Hi Andrew @awgu 😊,
As a big fan of fsdp2, I found a potential issue with mixed precision in the context of lazy init:
- ideally, fsdp2 allows the user to change the param dtype after initialization but before forward, hence the lazy init of mixed precision's `param_dtype`
(https://github.com/pytorch/pytorch/blob/d745b2b5163e821585d0701ebaf16a38f4d57eab/torch/distributed/fsdp/_fully_shard/_fsdp_param.py#L418)
- however, the mixed precision's `reduce_dtype` is set in the `__post_init__` of the MixPrecision dataclass (before lazy init) but still depends on the `param_dtype` (which is clamped during lazy init), which adds logical complexity and potential bugs:
https://github.com/pytorch/pytorch/blob/d745b2b5163e821585d0701ebaf16a38f4d57eab/torch/distributed/fsdp/_fully_shard/_fsdp_api.py#L50
- not sure whether we can move the clamping of `reduce_dtype` into lazy init as well, i.e., after `param_dtype` is clamped, which would simplify the logic and improve debuggability (a rough sketch is below).
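A minimal, self-contained sketch of the suggested ordering, with hypothetical helper and argument names (not the actual fsdp2 code):
```python
from typing import Optional
import torch

def lazy_init_mixed_precision(
    param_dtype: Optional[torch.dtype],
    reduce_dtype: Optional[torch.dtype],
    orig_param_dtype: torch.dtype,
):
    # Clamp param_dtype first, as lazy init already does today.
    if param_dtype == orig_param_dtype:
        param_dtype = None
    # Then derive/clamp reduce_dtype from the *clamped* param_dtype here,
    # instead of deciding it in the dataclass __post_init__ before lazy init.
    if reduce_dtype is None:
        reduce_dtype = param_dtype
    return param_dtype, reduce_dtype

# If the user later sets the param dtype back to the original, both end up None:
print(lazy_init_mixed_precision(torch.float32, None, torch.float32))  # (None, None)
```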
Thanks 🙏
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,741,269,179
|
Flex Attention Trainable Bias Bug on A6000
|
joydddd
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
`python test/inductor/test_flex_attention.py -k test_head_specific_gate_batch:2 `
on A6000 GPU commit `625b4ed`
```
======================================================================
FAIL: test_head_specific_gate_batch:2_head:4_seq_len:256_headdim:16_dtype:float32_mode_max-autotune-no-cudagraphs (__main__.TestLearnableBiases)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/joydong/pytorch/torch/testing/_internal/common_utils.py", line 3108, in wrapper
method(*args, **kwargs)
File "/home/joydong/pytorch/torch/testing/_internal/common_utils.py", line 557, in instantiated_test
test(self, **param_kwargs)
File "/home/joydong/pytorch/test/inductor/test_flex_attention.py", line 5000, in test_head_specific_gate
self._check_outputs_and_grads(
File "/home/joydong/pytorch/test/inductor/test_flex_attention.py", line 4594, in _check_outputs_and_grads
self._gold_check(eager, compiled, gold, name)
File "/home/joydong/pytorch/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/joydong/pytorch/test/inductor/test_flex_attention.py", line 4566, in _gold_check
self.assertLessEqual(
AssertionError: 0.022904716432094574 not less than or equal to 0.003473577984095755 :
Tensor: grad_query
Compiled error (0.02290472) exceeds reference error (0.00025730) * fudge_factor (13.5)
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestLearnableBiases.test_head_specific_gate_batch:2_head:4_seq_len:256_headdim:16_dtype:float32_mode_max-autotune-no-cudagraphs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
```
I cannot reproduce stably:
It fails on different test cases randomly, but it is always a max-autotune-no-cudagraphs case.
Running that test case alone does not trigger the error.
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+git082124a
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 2900.000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0a0+git082124a
[conda] mkl-include 2024.2.2 pypi_0 pypi
[conda] mkl-static 2024.2.2 pypi_0 pypi
[conda] numpy 1.26.0 pypi_0 pypi
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0a0+git082124a dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,741,265,763
|
init
|
pianpwk
|
closed
|
[
"Stale",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,740,999,512
|
torch._logging.set_logs kind of sucks for Jupyter notebooks
|
ezyang
|
open
|
[
"module: logging",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Problems:
1. You can't override TORCH_TRACE via... anything. Impossible to do.
2. It would be really helpful if the function took the string format that the env var takes; that format is very convenient and compact! (A sketch of the idea is below.)
3. all=INFO is extremely spammy, for some reason
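A rough sketch of what item 2 would look like, for illustration only (the string overload does not exist today):
```python
import logging
import torch

# Current keyword-based API:
torch._logging.set_logs(dynamo=logging.INFO, inductor=logging.DEBUG)

# Hypothetical convenience being asked for: accept the same compact string
# that the TORCH_LOGS env var understands, e.g.
#   torch._logging.set_logs("+dynamo,inductor,recompiles")
```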
cc @mlazos
### Versions
main
| true
|
2,740,989,791
|
remove allow-untyped-defs for utils/data/datapipes/dataframe/structures.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: dataloader",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143272
* __->__ #143273
| true
|
2,740,989,766
|
remove allow-untyped-defs for _inductor/codegen/rocm/rocm_template_buffer.py
|
bobrenjc93
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143272
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,740,989,447
|
remove allow-untyped-defs for distributed/rpc/_testing/__init__.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (rpc)",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143153
* #143273
* #143272
* __->__ #143271
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,740,936,298
|
[Flex Decoding] split_kv Schedule evening
|
joydddd
|
closed
|
[
"open source",
"Stale",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
`flex_decoding` divides the KV matrix along the sequence-length dimension into multiple sub-sequences and assigns them to different blocks to improve GPU occupancy and HBM bandwidth utilization.
`num_splits = num_SM / Bsz / Hq` (each SM is assigned one sub-sequence for one head).
This assignment happens statically, namely before we know what the mask looks like.
For example, for 12 SPARSE_KV blocks assigned to 4 SMs,
block 0: [0, 1, 2]
block 1: [3, 4, 5]
block 2: [6, 7, 8]
block 3: [9, 10, 11]
However, if the second half of the KV cache is masked out (very common case), it becomes:
block 0: [0, 1, 2]
block 1: [3, 4, 5]
block 2: x
block 3: x
And only 2 SMs are actually doing work.
### New Block Schedule
This PR changes the block schedule to:
block 0: [0, 4, 8]
block 1: [1, 5, 9]
block 2: [2, 6, 10]
block 3: [3, 7, 11]
with the same mask,
block 0: [0, 4, x]
block 1: [1, 5, x]
block 2: [2, x, x]
block 3: [3, x, x]
we have a more even schedule for most contiguous masks.
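A minimal pure-Python sketch of the two schedules above (indices only, not the actual Triton kernel):
```python
num_blocks, num_splits = 12, 4

# old schedule: contiguous chunks per block/SM
old = [[b for b in range(num_blocks) if b // (num_blocks // num_splits) == s]
       for s in range(num_splits)]
# new schedule: strided / interleaved assignment
new = [[b for b in range(num_blocks) if b % num_splits == s]
       for s in range(num_splits)]

print(old)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
print(new)  # [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
```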
# Performance
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,740,847,345
|
[ROCm] Improvements for vectorized elementwise kernels
|
jerrymannil
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"rocm",
"ciflow/unstable",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 35
|
CONTRIBUTOR
|
* Compute io_size as the minimum of the input size and the output size, rather than the sum of all sizes
* e.g., for torch.add() on half dtypes (bfloat16/float16), calc_io_size() returns 6, causing elems_per_thread to be 4
* But elems_per_thread = 8 works better on half dtypes for AMD GPUs (see the sketch after this list)
* Enable *_load_dwordx4 ISA for 16-bit and 8-bit dtypes on AMD GPUs by using vector sizes of 8 and 16, respectively
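An illustrative, standalone sketch of the io_size change described above, using the torch.add()-on-half example from the summary (assumed helper names; the real logic lives in the CUDA/HIP elementwise loop code):
```python
def calc_io_size_old(input_bytes, output_bytes):
    # old behaviour: sum of all operand element sizes -> 2 + 2 + 2 = 6 for half add
    return sum(input_bytes) + sum(output_bytes)

def calc_io_size_new(input_bytes, output_bytes):
    # new behaviour: min(total input size, total output size)
    return min(sum(input_bytes), sum(output_bytes))

print(calc_io_size_old([2, 2], [2]))  # 6 -> elems_per_thread = 4 under the old heuristic
print(calc_io_size_new([2, 2], [2]))  # 2 -> leaves room for elems_per_thread = 8 on AMD GPUs
```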
Co-author: @akadutta
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,740,768,393
|
[CD] Fix XPU linux CD whl test failure
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Follow-up to https://github.com/pytorch/pytorch/pull/142482; refer to the original fix PR https://github.com/pytorch/pytorch/pull/130742 and the new issue in https://github.com/pytorch/pytorch/actions/runs/12323126436/job/34403681230
Works for https://github.com/pytorch/pytorch/issues/114850
| true
|
2,740,653,377
|
Excessive precision discrepancy in torch.abs for complex Tensors with different data types
|
rookieLiu2018
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
Using `torch.abs` on complex tensors with `dtype=torch.complex32` and `dtype=torch.complex64` leads to an excessively large discrepancy in results
``` python
import torch
complex_tensor = [100 + 150j, 200 + 250j]
x = torch.tensor(complex_tensor, dtype=torch.complex32)
y = torch.tensor(complex_tensor, dtype=torch.complex64)
result = torch.abs(x)
result2 = torch.abs(y)
print(result)
print(result2)
```
The output is:
```
tensor([180.2500, 320.2500], dtype=torch.float16)
tensor([180.2776, 320.1562])
```
### Versions
```
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4400.41
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.6.0.74
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvtx==0.2.10
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.1+cu121
[pip3] torchaudio==2.5.1+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu121
[conda] Could not collect
```
| true
|
2,740,254,369
|
[Inductor XPU] Support max-autotune on XPU and reuse the corresponding Inductor UT.
|
etaf
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ci-no-td"
] | 12
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143266
This PR adds functional support for max-autotune on XPU. The current Triton templates and configurations are not well optimized for XPU, so the performance is not ready yet. Also, the `mm_plus_mm` template has accuracy issues in some cases. We will address these issues in follow-up PRs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,740,203,300
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
2,740,174,371
|
Add support for CPU scalar in addcmul
|
EmmettBicker
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 9
|
CONTRIBUTOR
|
Step required for performance in #143122
Adds support for a CPU scalar as `tensor_2` in addcmul. For example:
```
import torch
a = torch.rand(2, 2, device="cuda")
b = torch.tensor(1e-3)
torch.add(a, b)
torch.addcmul(a, a, b) # used to fail, now works
```
| true
|
2,740,145,240
|
[Easy] Bump CUDA nightly version to 11.8 / 12.4 / 12.6 in nightly pull tool
|
XuehaiPan
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: devx"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143262
* #141282
* __->__ #143263
cc @ptrblck @msaroufim @eqy @ZainRizvi @kit1980 @huydhn @clee2000
| true
|
2,740,145,194
|
Set proper `LD_LIBRARY_PATH` on Linux in nightly venv in nightly pull tool
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"Stale",
"topic: not user facing",
"no-stale",
"module: devx"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143262
Before this change:
```console
$ make setup-env-cuda PYTHON="${HOMEBREW_PREFIX}/bin/python3.12"
$ source venv/bin/activate
$ python3 -c 'import torch'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/PanXuehai/Projects/pytorch/torch/__init__.py", line 379, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: libcudnn.so.9: cannot open shared object file: No such file or directory
```
This PR adds `site-packages/nvidia/**/lib` to `LD_LIBRARY_PATH` in the `venv/bin/activate` script so that the NVIDIA PyPI packages can be loaded correctly.
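For illustration, a small standalone Python sketch (assumed wheel layout) of the library directories the activate script needs to prepend:
```python
import glob
import os
import sysconfig

site_packages = sysconfig.get_paths()["purelib"]
nvidia_lib_dirs = sorted(glob.glob(os.path.join(site_packages, "nvidia", "*", "lib")))
# Value to prepend to LD_LIBRARY_PATH so libcudnn.so.9 and friends resolve:
print(os.pathsep.join(nvidia_lib_dirs))
```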
See also:
- #141837
cc @ZainRizvi @kit1980 @huydhn @clee2000
| true
|
2,740,125,800
|
Add a warning when a tensor with requires_grad=True is converted to a scalar
|
joshdavham
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"release notes: python_frontend",
"topic: improvements",
"ci-no-td"
] | 49
|
CONTRIBUTOR
|
Fixes #143071
Operations performed on tensors with `requires_grad=True` such as
```python
import torch
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3
```
and
```python
x = torch.tensor(2.0, requires_grad=True)
y = torch.pow(x,3)
```
are valid operations.
While an operation using `numpy` like
```python
import numpy as np
x = torch.tensor(2.0, requires_grad=True)
y = np.pow(x,3)
# > RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
```
leads to an error.
However, an operation that uses `math` like
```python
import math
x = torch.tensor(2.0, requires_grad=True)
y = math.pow(x,3)
```
does not cause an error, and `y` is no longer a tensor with a gradient!
This represents a [footgun](https://en.wiktionary.org/wiki/footgun#Noun) for some users, like myself when training small, custom, non-neural network models.
To prevent future undesired behavior, I added a warning when converting tensors with `requires_grad=True` to scalars. Now, when using `math.pow` on a `tensor`, we get a single warning with:
```python
x = torch.tensor(2.0, requires_grad=True)
y = math.pow(x,3)
# > UserWarning: Converting a tensor with requires_grad=True to a scalar may lead to unexpected behavior.
# Consider using tensor.detach() first.
```
Please let me know if you have any questions 👍
| true
|
2,740,017,317
|
Regression: `BlockMask__getitem__` returns a new BlockMask but forgets to change its shape on the Q dimension
|
w568w
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0
|
NONE
|
### 🐛 Describe the bug
## Problem
Before af883262509b80f13a08dd5184d7b9456da38173, slicing a BlockMask along the query dimension would shrink its length on that dimension (and unfortunately round up the KV dimension):
```python
from torch.nn.attention.flex_attention import create_block_mask
block_mask = create_block_mask(
lambda b, h, q_idx, kv_idx: q_idx >= kv_idx,
B=None,
H=None,
Q_LEN=1024,
KV_LEN=1025, # it would be rounded up to 1280 = 1024 + 256
device="cuda",
BLOCK_SIZE=256,
)
print(block_mask[:, :, :1].shape) # obtain the first block along the Q dimension
# Output: (1, 1, 256, 1280)
```
## Output
But now, it will keep the original shape and make the output worse:
```python
# Same code as above
print(block_mask[:, :, :1].shape)
# Output: (1, 1, 1024, 1025)
```
`shape` matters because the same commit added strict validation of the input Q/KV lengths against `shape`. If `shape` is wrong, the BlockMask will fail that assertion.
## Expected Output
`(1, 1, 256, 1025)`
## Additional Information
af883262509b80f13a08dd5184d7b9456da38173 introduced `seq_lengths` to store `(kv_indices.shape[-2] * BLOCK_SIZE[0], q_indices.shape[-2] * BLOCK_SIZE[1])`.
But in the `__getitem__` function, it passes the original `self.seq_lengths` to the newly created BlockMask without recalculating it:
https://github.com/pytorch/pytorch/blob/91bf2e16debdc41f5dde2bb5cc8e4f39f8955d4e/torch/nn/attention/flex_attention.py#L478
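A minimal standalone sketch of the recomputation I would expect instead (assumed names, not the actual flex_attention code):
```python
def resliced_seq_lengths(num_q_blocks_after_slice: int,
                         q_block_size: int,
                         orig_kv_len: int) -> tuple[int, int]:
    # The Q length must be rebuilt from the sliced block count;
    # the KV length is unchanged by a Q-dimension slice.
    return (num_q_blocks_after_slice * q_block_size, orig_kv_len)

print(resliced_seq_lengths(1, 256, 1025))  # (256, 1025) -> shape (1, 1, 256, 1025)
```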
### Versions
```
PyTorch version: 2.6.0.dev20241214+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800 80GB PCIe
GPU 1: NVIDIA A800 80GB PCIe
GPU 2: NVIDIA A800 80GB PCIe
GPU 3: NVIDIA A800 80GB PCIe
GPU 4: NVIDIA A800 80GB PCIe
GPU 5: NVIDIA A800 80GB PCIe
GPU 6: NVIDIA A800 80GB PCIe
GPU 7: NVIDIA A800 80GB PCIe
Nvidia driver version: 535.183.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
Stepping: 6
CPU MHz: 2600.000
CPU max MHz: 3100.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
L1d cache: 2.6 MiB
L1i cache: 1.8 MiB
L2 cache: 70 MiB
L3 cache: 84 MiB
NUMA node0 CPU(s): 0-13,56-69
NUMA node1 CPU(s): 14-27,70-83
NUMA node2 CPU(s): 28-41,84-97
NUMA node3 CPU(s): 42-55,98-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] pynvjitlink-cu12==0.4.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241214+cu124
[pip3] torchvision==0.22.0.dev20241214+cu124
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] pynvjitlink-cu12 0.4.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241214+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241214+cu124 pypi_0 pypi
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,739,988,920
|
s.isIntegral(false) INTERNAL ASSERT FAILED
|
barbara42
|
open
|
[
"needs reproduction",
"module: autograd",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
When training ViT_b_16 (https://pytorch.org/vision/main/models/generated/torchvision.models.vit_b_16.html#torchvision.models.vit_b_16) on CUDA
```
model = helper.train_model(model, dataloaders, criterion, optimizer, scheduler,
File "/home/birdy/meng_thesis/code/master_ifcb_classifier/utils/resnet_experiment_helpers.py", line 298, in train_model
loss.backward()
File "/home/birdy/.conda/envs/cv-env/lib/python3.10/site-packages/torch/_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/birdy/.conda/envs/cv-env/lib/python3.10/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: s.isIntegral(false) INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch-base_1664259963002/work/aten/src/ATen/ScalarOps.h":39, please report a bug to PyTorch.
```
Code defining model
```
model = models.vit_b_16(weights='IMAGENET1K_V1')
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)
model = nn.DataParallel(model)
model= model.to(device)
```
Code for training function
```
def train_model(model, dataloaders, criterion, optimizer, scheduler, num_epochs=25, save_checkpoints = False, DEST='', model_name="ResNet", val_fqn=1, device="cpu"):
since = time.time()
# Create a temporary directory to save training checkpoints
with TemporaryDirectory() as tempdir:
#best_model_params_path = os.path.join(tempdir, 'best_model_params.pt')
best_model_params_path = f'{DEST}/{model_name}-best.pt'
torch.save(model.state_dict(), best_model_params_path)
best_acc = 0.0
# intialize training history if model doesn't already have it
if hasattr(model, 'history') == False:
model.history = {
'train_loss': [],
'train_acc': [],
'train_class_acc': [],
'train_epoch_duration': [],
'val_epochs': [],
'val_loss': [],
'val_acc': [],
'val_class_acc': [],
'val_epoch_duration': []
}
dt_string = datetime.now().strftime("%d-%m-%Y-%H-%M-%S")
for epoch in range(num_epochs):
print(f'Epoch {epoch}/{num_epochs - 1}')
print('-' * 10)
epoch_start_time = time.time()
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'val' and epoch % val_fqn !=0:
continue # only validate model every nth time
if phase == 'train':
model.train() # Set model to training mode
else:
model.history['val_epochs'].append(epoch)
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
running_class_acc = 0.0
batch_counter = 0
# Iterate over data.
for inputs, labels in tqdm(dataloaders[phase]):
batch_counter += 1
inputs = inputs.to(device)
labels = labels.to(device)
# labels = torch.tensor(labels).to(device)
# zero the parameter gradients
optimizer.zero_grad()
# print(inputs.shape)
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
avg_class_acc, _ = calculate_per_class_accuracy(labels.data.cpu().numpy(), preds.cpu().numpy())
running_class_acc += avg_class_acc
if phase == 'train':
if type(scheduler) == lr_scheduler.ReduceLROnPlateau:
scheduler.step(running_loss)
else:
scheduler.step()
epoch_loss = running_loss / len(dataloaders[phase].dataset)
epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)
epoch_class_acc = running_class_acc / batch_counter
epoch_duration = time.time() - epoch_start_time
print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')
model.history[f'{phase}_loss'].append(epoch_loss)
model.history[f'{phase}_acc'].append(epoch_acc.item())
model.history[f'{phase}_class_acc'].append(epoch_class_acc)
model.history[f'{phase}_epoch_duration'].append(epoch_duration)
if save_checkpoints:
# write over the old checkpoint - saves mem space
PATH = f"{DEST}/{model_name}-{dt_string}-checkpoint.pt"
save_checkpoint(model, optimizer, PATH, epoch)
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
torch.save(model.state_dict(), best_model_params_path)
print()
time_elapsed = time.time() - since
model.history['time_elapsed'] = time_elapsed
print(f'Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s')
print(f'Best val Acc: {best_acc:4f}')
# load best model weights
# model.load_state_dict(torch.load(best_model_params_path))
return model
```
### Versions
Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.3 (Ootpa) (ppc64le)
GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
Clang version: Could not collect
CMake version: version 3.11.4
Libc version: glibc-2.28
Python version: 3.10.13 (main, Sep 11 2023, 13:14:29) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-240.el8.ppc64le-ppc64le-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: ppc64le
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 4
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Model: 2.2 (pvr 004e 1202)
Model name: POWER9, altivec supported
CPU max MHz: 3800.0000
CPU min MHz: 2166.0000
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 10240K
NUMA node0 CPU(s): 0-63
NUMA node8 CPU(s): 64-127
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==1.12.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.13.1
[conda] _pytorch_select 2.0 cuda_2 https://opence.mit.edu
[conda] cudatoolkit 11.4.4 h5d40d8d_10 https://opence.mit.edu
[conda] cudnn 8.3.0_11.4 h5b181e6_1 https://opence.mit.edu
[conda] nccl 2.12.7 cuda11.4_1 https://opence.mit.edu
[conda] numpy 1.23.5 py310h87cc683_0
[conda] numpy-base 1.23.5 py310hac71eb6_0
[conda] pytorch-base 1.12.1 cuda11.4_py310_pb3.19_2 https://opence.mit.edu
[conda] torchprofile 0.0.4 pypi_0 pypi
[conda] torchvision 0.13.1 cuda11.4_py310_1 https://opence.mit.edu
[conda] torchvision-base 0.13.1 cuda11.4_py310_1 https://opence.mit.edu
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,739,869,869
|
Proper support for optionals in TorchScript
|
bluenote10
|
open
|
[
"oncall: jit"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
This came up as part of https://github.com/pytorch/pytorch/pull/142326.
TorchScript should support `Optional[T]` or `T | None` annotations correctly. Currently something basic like the following fails:
```py
import torch
class MyScriptModule(torch.nn.Module):
bias: torch.Tensor | None
def __init__(self, use_bias: bool = True):
super().__init__()
if use_bias:
self.bias = torch.nn.Parameter(torch.tensor(1.0))
else:
self.bias = None
def forward(self, input):
if self.bias is not None:
return input + self.bias
else:
return input
my_script_module = torch.jit.script(MyScriptModule())
```
The TorchScript compilation here fails because TorchScript does not handle the `bias: Optional[torch.Tensor]` annotation correctly. Note that this is a fairly common pattern, so supporting it properly would be quite valuable.
What's even worse: the example compiles when using a **wrong** type annotation `bias: torch.Tensor`, or when omitting the annotation entirely, which implicitly results in the same **wrong** type due to how `__getattr__` is typed (falsely promising non-optionality). That is pretty bad, because it forces us to lie to the type checker, sweeping bugs under the carpet.
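For completeness, here is a minimal sketch of the (unsatisfying) workaround described above: the annotation is deliberately wrong so that scripting succeeds. The class name is changed purely for illustration.
```py
import torch

class MyScriptModuleWorkaround(torch.nn.Module):
    # Deliberately *wrong* annotation: the runtime value can be None, but the
    # non-optional annotation is what makes torch.jit.script accept the module
    # (per the description above).
    bias: torch.Tensor

    def __init__(self, use_bias: bool = True):
        super().__init__()
        if use_bias:
            self.bias = torch.nn.Parameter(torch.tensor(1.0))
        else:
            self.bias = None

    def forward(self, input):
        if self.bias is not None:
            return input + self.bias
        return input

my_script_module = torch.jit.script(MyScriptModuleWorkaround())
```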
---
Possibly related to https://github.com/pytorch/pytorch/issues/75002
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,739,767,182
|
[4/N] Apply py39 ruff and pyupgrade fixes
|
cyyever
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend",
"suppress-bc-linter",
"ciflow/s390"
] | 14
|
COLLABORATOR
|
`torch/fx/passes/annotate_getitem_nodes.py` was changed to support the new type hinting annotations.
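For context, a tiny illustrative snippet (not code from this PR) showing the kind of annotation rewrite that a Python 3.9 target enables, i.e. PEP 585 builtin generics instead of `typing` aliases:
```py
from typing import List, Tuple  # pre-3.9 style


def old_style(xs: List[int]) -> Tuple[int, int]:
    return min(xs), max(xs)


# py39+ style that pyupgrade/ruff rewrites to: builtin generics, no typing import
def new_style(xs: list[int]) -> tuple[int, int]:
    return min(xs), max(xs)
```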
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,739,741,733
|
Remove all dead type ignores (round 2)
|
bluenote10
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"module: rocm",
"module: cpu",
"open source",
"module: amp (automated mixed precision)",
"Stale",
"release notes: quantization",
"release notes: distributed (c10d)",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"oncall: distributed checkpointing"
] | 2
|
CONTRIBUTOR
|
The next follow-up to #142325.
This PR removes all dead/unused `# type: ignore` comments that do not have the code `# type: ignore[import]` (because these may be conditional type ignores, as discussed in https://github.com/pytorch/pytorch/pull/60006#issuecomment-2480604728).
Considering that the number of dead type ignores is really huge, fixing them manually like in #142325 would take 10-20 PRs of that size, which would take a really long time. Therefore I switched strategy and increased the level of automation. What I did here is:
- I temporarily set `allowed_unused_ignore = False` in the `mypy.ini` to let mypy detect the currently dead type ignores.
- I used a script that takes the mypy output and modifies all affected places to remove the type ignores automatically (a rough sketch of this approach is shown after this list).
- The script keeps all `# type: ignore[import]` because these are a likely candidate of a "conditional ignore" (i.e., one that may or may not fire depending on installed optional external dependencies).
- I manually reviewed the ignores, and for some that also looked like they belong in the "conditional ignore" category, I applied the envisioned `type: ignore[<error-code>, unused-ignore]` pattern.
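Roughly, the automation looks like the following sketch (this is an illustration of the approach, not the actual script used; the regexes and option names are assumptions):
```py
# Sketch: strip trailing "# type: ignore" comments that mypy reports as unused
# (e.g. when run with warn_unused_ignores), keeping "type: ignore[import]".
import re
import sys
from collections import defaultdict

UNUSED_RE = re.compile(r'^(?P<path>[^:]+):(?P<line>\d+): error: Unused "type: ignore" comment')
IGNORE_RE = re.compile(r"\s*#\s*type:\s*ignore(\[[^\]]*\])?\s*$")


def collect_unused(mypy_output: str) -> dict[str, set[int]]:
    """Map file path -> 1-based line numbers that carry an unused ignore."""
    unused: dict[str, set[int]] = defaultdict(set)
    for line in mypy_output.splitlines():
        m = UNUSED_RE.match(line)
        if m:
            unused[m.group("path")].add(int(m.group("line")))
    return unused


def strip_ignores(path: str, linenos: set[int]) -> None:
    with open(path) as f:
        lines = f.readlines()
    for lineno in linenos:
        src = lines[lineno - 1].rstrip("\n")
        if "type: ignore[import]" in src:
            continue  # keep likely conditional ignores
        lines[lineno - 1] = IGNORE_RE.sub("", src) + "\n"
    with open(path, "w") as f:
        f.writelines(lines)


if __name__ == "__main__":
    # Usage: mypy ... | python strip_unused_ignores.py
    for path, linenos in collect_unused(sys.stdin.read()).items():
        strip_ignores(path, linenos)
```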
In general I feel that we should not worry too much about getting each and every conditional ignore right. Even if we accidentally remove a type ignore that is conditional under some unusual setup (note that this is mainly a "partial environment" problem), it would be rather easy to re-introduce a few if needed. I would assume this is a very small number compared to the huge number of actually dead and harmful type ignores. In other words, there is probably more value in erring on the side of removing them more aggressively.
----
As a result of the huge number of dead ignores, the diff is pretty big. If desired I could break it down into sub-PRs, e.g. by package (suggestions on how best to structure them are welcome). On the other hand, if the assigned reviewers review their assigned packages anyway, the net result would be much the same as with a single PR.
CC @ezyang
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @mingfeima @XiaobingSuper @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 @ColinPeppler @desertfire
| true
|
2,739,615,243
|
Remove unnecessary once flag usage
|
cyyever
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ciflow/s390"
] | 12
|
COLLABORATOR
|
Static local variables in C++11 are guaranteed to be initialised exactly once, as mentioned [here](https://en.cppreference.com/w/cpp/language/storage_duration):
```
If multiple threads attempt to initialize the same static local variable concurrently,
the initialization occurs exactly once
(similar behavior can be obtained for arbitrary functions with std::call_once.
Usual implementations of this feature use variants
of the double-checked locking pattern,
which reduces runtime overhead for already-initialized local statics
to a single non-atomic boolean comparison.
```
Given that a static `c10::once_flag` was used before, why not just use the associated function to initialise the related static variables? That is the motivation behind this PR.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,739,506,633
|
[TorchGen] Simplify argumenttype_type
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Simplify torchgen code.
| true
|
2,739,492,184
|
Introduce gc_time_us field for dynamo_compile scuba logging
|
qiurc
|
closed
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 14
|
CONTRIBUTOR
|
Summary: The newly introduced field will be used by the following diff, D67062158, to record the garbage collection time during PT2 compilation.
Test Plan:
This diff itself should be a no-op.
Tested together with D67062158; please refer to D67062158 for the detailed test plan and results.
Differential Revision: D67226982
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,739,480,517
|
Remove __ubsan_ignore_undefined__
|
cyyever
|
open
|
[
"module: cpu",
"triaged",
"open source",
"topic: not user facing"
] | 8
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,739,457,137
|
Simplify host_softmax
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,739,454,547
|
[PyTorch] Add backend aot_eager_decomp_partition_with_mode
|
silverlakeli
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 13
|
CONTRIBUTOR
|
Summary:
## Why
To make it possible to run a torch dispatch mode inside compiled modules. This enables running MemoryTrackerMode (in the next diff) to collect the memory usage of compiled modules.
## What
Add a backend `aot_eager_decomp_partition_with_mode`.
Add an `enable_log` flag to the backend to control compilation logging (which can be very verbose and slow down runs that use the mode).
Test Plan:
unittest
E2E tested in the next diff, which shows that the memory reported by the mode passed to this backend is very close to the actual job's memory snapshot.
Differential Revision: D67227144
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,739,424,688
|
torch.select could not guard on data-dependent expression error
|
ydwu4
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Filing this issue for tracking.
I tried the following code:
```python
import torch
torch._dynamo.config.capture_scalar_outputs = True
def f(x, t):
c = x.item()
torch._check(c >= 0)
torch._check(c < t.size(0))
return torch.select(t, 0, c) + 1
out = torch.compile(f, fullgraph=True)(torch.tensor(3, dtype=torch.int64), torch.randn(4, 4))
print(out)
```
with error log:
```
I1213 17:00:11.181000 1661649 torch/fx/experimental/symbolic_shapes.py:3194] [0/0] create_env
I1213 17:00:11.191000 1661649 torch/fx/experimental/symbolic_shapes.py:4425] [0/0] create_symbol s0 = 3 for L['x'].item() [-int_oo, int_oo] c = x.item() # test.py:4 in f (_dynamo/variables/builder.py:2872 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0" or to suppress this message run with TORCHDYNAMO_EXTENDED_ADVICE="0"
V1213 17:00:11.206000 1661649 torch/fx/experimental/symbolic_shapes.py:5804] [0/0] _update_var_to_range s0 = VR[0, int_oo] (update)
I1213 17:00:11.206000 1661649 torch/fx/experimental/symbolic_shapes.py:6283] [0/0] eval s0 >= 0 [guard added] torch._check(c >= 0) # test.py:5 in f (_dynamo/utils.py:2587 in run_node), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="s0 >= 0"
V1213 17:00:11.232000 1661649 torch/fx/experimental/symbolic_shapes.py:5804] [0/0] _update_var_to_range s0 = VR[0, 3] (update)
I1213 17:00:11.233000 1661649 torch/fx/experimental/symbolic_shapes.py:6283] [0/0] eval s0 < 4 [guard added] torch._check(c < t.size(0)) # test.py:6 in f (_dynamo/utils.py:2587 in run_node), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="s0 < 4"
V1213 17:00:11.236000 1661649 torch/fx/experimental/symbolic_shapes.py:6414] [0/0] eval -s0 > 4 == False [statically known]
V1213 17:00:11.236000 1661649 torch/fx/experimental/symbolic_shapes.py:6414] [0/0] eval s0 >= 4 == False [statically known]
V1213 17:00:11.238000 1661649 torch/fx/experimental/symbolic_shapes.py:6414] [0/0] eval 4*s0 >= 0 == True [statically known]
V1213 17:00:11.240000 1661649 torch/fx/experimental/symbolic_shapes.py:6414] [0/0] eval 16*s0 + 16 > 64 == False [statically known]
I1213 17:00:12.644000 1661649 torch/fx/experimental/symbolic_shapes.py:4105] [0/0] create_unbacked_symint u0 [-int_oo, int_oo] (_subclasses/fake_impls.py:403 in local_scalar_dense)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] Data dependent variable 'u0' allocated at:
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/test.py", line 9, in <module>
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] out = torch.compile(f, fullgraph=True)(torch.tensor(3, dtype=torch.int64), torch.randn(4, 4))
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 573, in _fn
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return fn(*args, **kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 1380, in __call__
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return self._torchdynamo_orig_callable(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 547, in __call__
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return _compile(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 986, in _compile
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] guarded_code = compile_inner(code, one_graph, hooks, transform)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 715, in compile_inner
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return _compile_inner(code, one_graph, hooks, transform)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return function(*args, **kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] out_code = transform_code_object(code, transform)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] transformations(instructions, code_options)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 231, in _fn
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return fn(*args, **kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 662, in transform
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] tracer.run()
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 2866, in run
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] super().run()
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 1052, in run
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] while self.step():
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 962, in step
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self.dispatch_table[inst.opcode](self, inst)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 3046, in RETURN_VALUE
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self._return(inst)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 3031, in _return
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self.output.compile_subgraph(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self.compile_and_call_fx_graph(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn = self.call_user_compiler(gm)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return self._call_user_compiler(gm)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn = compiler_fn(gm, self.example_inputs())
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_gm = compiler_fn(gm, example_inputs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_gm = compiler_fn(gm, example_inputs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/__init__.py", line 2314, in __call__
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return compile_fx(model_, inputs_, config_patches=self.config)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_inductor/compile_fx.py", line 1859, in compile_fx
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return aot_autograd(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/backends/common.py", line 83, in __call__
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1145, in aot_module_simplified
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn = AOTAutogradCache.load(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn = dispatch_and_compile()
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn, _ = create_aot_dispatcher_function(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return _create_aot_dispatcher_function(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 681, in _create_aot_dispatcher_function
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] fw_metadata = run_functionalized_fw_and_collect_metadata(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] flat_f_outs = f(*flat_f_args)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] out = PropagateUnbackedSymInts(mod).run(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 167, in run
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self.env[node] = self.run_node(node)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6781, in run_node
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] result = super().run_node(n)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 230, in run_node
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return getattr(self, n.op)(n.target, args, kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 334, in call_method
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return getattr(self_obj, target)(*args_tail, **kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/functional_tensor.py", line 527, in __torch_dispatch__
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] outs_unwrapped = func._op_dk(
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/utils/_stats.py", line 21, in wrapper
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return fn(*args, **kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return self.dispatch(func, types, args, kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1817, in dispatch
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return self._cached_dispatch_impl(func, types, args, kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1387, in _cached_dispatch_impl
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] output = self._dispatch_impl(func, types, args, kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 2355, in _dispatch_impl
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] op_impl_out = op_impl(self, func, *args, **kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_impls.py", line 160, in dispatch_to_op_implementations_dict
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return op_implementations_dict[func](fake_mode, func, *args, **kwargs)
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_impls.py", line 403, in local_scalar_dense
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] r = fake_mode.shape_env.create_unbacked_symint()
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return retlog(fn(*args, **kwargs))
V1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:5729] [0/0]
W1213 17:00:12.649000 1661649 torch/fx/experimental/symbolic_shapes.py:6309] [0/0] failed during evaluate_expr(-u0 > 4, hint=None, size_oblivious=True, forcing_spec=False
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] failed while running evaluate_expr(*(-u0 > 4, None), **{'fx_node': False, 'size_oblivious': True})
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] Traceback (most recent call last):
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] return retlog(fn(*args, **kwargs))
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6305, in evaluate_expr
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] return self._evaluate_expr(
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6495, in _evaluate_expr
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] raise self._make_data_dependent_error(
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression -u0 > 4 (unhinted: -u0 > 4). (Size-like symbols: none)
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0]
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] Caused by: (_meta_registrations.py:4832 in meta_select)
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] For more information, run with TORCH_LOGS="dynamic"
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0]
E1213 17:00:12.650000 1661649 torch/fx/experimental/recording.py:299] [0/0] For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] failed while attempting to run meta for aten.select.int
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] Traceback (most recent call last):
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 2385, in _dispatch_impl
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] r = func(*args, **kwargs)
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/_ops.py", line 722, in __call__
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] return self._op(*args, **kwargs)
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/_meta_registrations.py", line 4832, in meta_select
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] guard_size_oblivious(-index > size) or guard_size_oblivious(index >= size)
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 409, in guard_size_oblivious
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] return expr.node.guard_size_oblivious("", 0)
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/sym_node.py", line 564, in guard_size_oblivious
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] r = self.shape_env.evaluate_expr(
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] return retlog(fn(*args, **kwargs))
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6305, in evaluate_expr
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] return self._evaluate_expr(
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6495, in _evaluate_expr
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] raise self._make_data_dependent_error(
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression -u0 > 4 (unhinted: -u0 > 4). (Size-like symbols: none)
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0]
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] Caused by: (_meta_registrations.py:4832 in meta_select)
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] For more information, run with TORCH_LOGS="dynamic"
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0]
E1213 17:00:12.650000 1661649 torch/_subclasses/fake_tensor.py:2389] [0/0] For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
Traceback (most recent call last):
File "/data/users/yidi/pytorch/test.py", line 9, in <module>
out = torch.compile(f, fullgraph=True)(torch.tensor(3, dtype=torch.int64), torch.randn(4, 4))
File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/data/users/yidi/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 2866, in run
super().run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 3046, in RETURN_VALUE
self._return(inst)
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 3031, in _return
self.output.compile_subgraph(
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/users/yidi/pytorch/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/data/users/yidi/pytorch/torch/_inductor/compile_fx.py", line 1859, in compile_fx
return aot_autograd(
File "/data/users/yidi/pytorch/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1145, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
compiled_fn = dispatch_and_compile()
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 681, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
flat_f_outs = f(*flat_f_args)
File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
out = PropagateUnbackedSymInts(mod).run(
File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6781, in run_node
result = super().run_node(n)
File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 310, in call_function
return target(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/functional_tensor.py", line 527, in __torch_dispatch__
outs_unwrapped = func._op_dk(
File "/data/users/yidi/pytorch/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1817, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1378, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 2385, in _dispatch_impl
r = func(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_ops.py", line 722, in __call__
return self._op(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_meta_registrations.py", line 4832, in meta_select
guard_size_oblivious(-index > size) or guard_size_oblivious(index >= size)
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 409, in guard_size_oblivious
return expr.node.guard_size_oblivious("", 0)
File "/data/users/yidi/pytorch/torch/fx/experimental/sym_node.py", line 564, in guard_size_oblivious
r = self.shape_env.evaluate_expr(
File "/data/users/yidi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6305, in evaluate_expr
return self._evaluate_expr(
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6495, in _evaluate_expr
raise self._make_data_dependent_error(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
GuardOnDataDependentSymNode: Could not guard on data-dependent expression -u0 > 4 (unhinted: -u0 > 4). (Size-like symbols: none)
Caused by: (_meta_registrations.py:4832 in meta_select)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
While executing %select : [num_users=1] = call_function[target=torch.select](args = (%l_t_, 0, %item), kwargs = {})
Original traceback:
File "/data/users/yidi/pytorch/test.py", line 7, in f
return torch.select(t, 0, c) + 1
```
and
```python
import torch
torch._dynamo.config.capture_scalar_outputs = True
def f(x, t):
c = x.item()
torch._check_is_size(c)
torch._check(c < t.size(0))
return torch.select(t, 0, c) + 1
out = torch.compile(f, fullgraph=True)(torch.tensor(3, dtype=torch.int64), torch.randn(4, 4))
print(out)
```
with error log
```
I1213 16:58:35.194000 1650887 torch/fx/experimental/symbolic_shapes.py:3194] [0/0] create_env
I1213 16:58:35.204000 1650887 torch/fx/experimental/symbolic_shapes.py:4425] [0/0] create_symbol s0 = 3 for L['x'].item() [-int_oo, int_oo] c = x.item() # test.py:4 in f (_dynamo/variables/builder.py:2872 in <lambda>), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0" or to suppress this message run with TORCHDYNAMO_EXTENDED_ADVICE="0"
V1213 16:58:35.218000 1650887 torch/fx/experimental/symbolic_shapes.py:5804] [0/0] _update_var_to_range s0 = VR[0, int_oo] (update)
I1213 16:58:35.219000 1650887 torch/fx/experimental/symbolic_shapes.py:6283] [0/0] eval s0 >= 0 [guard added] torch._check_is_size(c) # test.py:5 in f (_dynamo/utils.py:2587 in run_node), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="s0 >= 0"
V1213 16:58:35.243000 1650887 torch/fx/experimental/symbolic_shapes.py:6414] [0/0] eval -s0 > 4 == False [statically known]
V1213 16:58:35.246000 1650887 torch/fx/experimental/symbolic_shapes.py:5804] [0/0] _update_var_to_range s0 = VR[0, 3] (update)
I1213 16:58:35.247000 1650887 torch/fx/experimental/symbolic_shapes.py:6283] [0/0] eval s0 < 4 [guard added] return torch.select(t, 0, c) + 1 # test.py:6 in f (_meta_registrations.py:4832 in meta_select), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="s0 < 4"
V1213 16:58:35.249000 1650887 torch/fx/experimental/symbolic_shapes.py:6414] [0/0] eval 4*s0 >= 0 == True [statically known]
V1213 16:58:35.251000 1650887 torch/fx/experimental/symbolic_shapes.py:6414] [0/0] eval 16*s0 + 16 > 64 == False [statically known]
I1213 16:58:36.363000 1650887 torch/fx/experimental/symbolic_shapes.py:4105] [0/0] create_unbacked_symint u0 [-int_oo, int_oo] (_subclasses/fake_impls.py:403 in local_scalar_dense)
V1213 16:58:36.364000 1650887 torch/fx/experimental/symbolic_shapes.py:5804] [0/0] _update_var_to_range u0 = VR[0, int_oo] (update)
I1213 16:58:36.365000 1650887 torch/fx/experimental/symbolic_shapes.py:6283] [0/0] runtime_assert u0 >= 0 [guard added] (_functorch/_aot_autograd/traced_function_transforms.py:875 in functional_call), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="u0 >= 0"
V1213 16:58:36.367000 1650887 torch/fx/experimental/symbolic_shapes.py:6414] [0/0] eval -u0 > 4 == False [statically known]
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] Data dependent variable 'u0' allocated at:
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/test.py", line 8, in <module>
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] out = torch.compile(f, fullgraph=True)(torch.tensor(3, dtype=torch.int64), torch.randn(4, 4))
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 573, in _fn
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return fn(*args, **kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 1380, in __call__
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return self._torchdynamo_orig_callable(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 547, in __call__
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return _compile(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 986, in _compile
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] guarded_code = compile_inner(code, one_graph, hooks, transform)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 715, in compile_inner
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return _compile_inner(code, one_graph, hooks, transform)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return function(*args, **kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] out_code = transform_code_object(code, transform)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] transformations(instructions, code_options)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 231, in _fn
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return fn(*args, **kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 662, in transform
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] tracer.run()
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 2866, in run
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] super().run()
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 1052, in run
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] while self.step():
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 962, in step
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self.dispatch_table[inst.opcode](self, inst)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 3046, in RETURN_VALUE
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self._return(inst)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 3031, in _return
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self.output.compile_subgraph(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self.compile_and_call_fx_graph(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn = self.call_user_compiler(gm)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return self._call_user_compiler(gm)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn = compiler_fn(gm, self.example_inputs())
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_gm = compiler_fn(gm, example_inputs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_gm = compiler_fn(gm, example_inputs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/__init__.py", line 2314, in __call__
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return compile_fx(model_, inputs_, config_patches=self.config)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_inductor/compile_fx.py", line 1859, in compile_fx
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return aot_autograd(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_dynamo/backends/common.py", line 83, in __call__
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1145, in aot_module_simplified
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn = AOTAutogradCache.load(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn = dispatch_and_compile()
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] compiled_fn, _ = create_aot_dispatcher_function(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return _create_aot_dispatcher_function(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 681, in _create_aot_dispatcher_function
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] fw_metadata = run_functionalized_fw_and_collect_metadata(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] flat_f_outs = f(*flat_f_args)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] out = PropagateUnbackedSymInts(mod).run(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 167, in run
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] self.env[node] = self.run_node(node)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6781, in run_node
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] result = super().run_node(n)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 230, in run_node
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return getattr(self, n.op)(n.target, args, kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 334, in call_method
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return getattr(self_obj, target)(*args_tail, **kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/functional_tensor.py", line 527, in __torch_dispatch__
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] outs_unwrapped = func._op_dk(
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/utils/_stats.py", line 21, in wrapper
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return fn(*args, **kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return self.dispatch(func, types, args, kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1817, in dispatch
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return self._cached_dispatch_impl(func, types, args, kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1387, in _cached_dispatch_impl
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] output = self._dispatch_impl(func, types, args, kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 2355, in _dispatch_impl
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] op_impl_out = op_impl(self, func, *args, **kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_impls.py", line 160, in dispatch_to_op_implementations_dict
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return op_implementations_dict[func](fake_mode, func, *args, **kwargs)
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_impls.py", line 403, in local_scalar_dense
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] r = fake_mode.shape_env.create_unbacked_symint()
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0] return retlog(fn(*args, **kwargs))
V1213 16:58:36.371000 1650887 torch/fx/experimental/symbolic_shapes.py:5729] [0/0]
W1213 16:58:36.372000 1650887 torch/fx/experimental/symbolic_shapes.py:6309] [0/0] failed during evaluate_expr(u0 >= 4, hint=None, size_oblivious=True, forcing_spec=False
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] failed while running evaluate_expr(*(u0 >= 4, None), **{'fx_node': False, 'size_oblivious': True})
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] Traceback (most recent call last):
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] return retlog(fn(*args, **kwargs))
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6305, in evaluate_expr
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] return self._evaluate_expr(
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6495, in _evaluate_expr
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] raise self._make_data_dependent_error(
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression u0 >= 4 (unhinted: u0 >= 4). (Size-like symbols: u0)
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0]
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] Caused by: (_meta_registrations.py:4832 in meta_select)
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] For more information, run with TORCH_LOGS="dynamic"
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0]
E1213 16:58:36.372000 1650887 torch/fx/experimental/recording.py:299] [0/0] For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] failed while attempting to run meta for aten.select.int
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] Traceback (most recent call last):
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 2385, in _dispatch_impl
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] r = func(*args, **kwargs)
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/_ops.py", line 722, in __call__
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] return self._op(*args, **kwargs)
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/_meta_registrations.py", line 4832, in meta_select
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] guard_size_oblivious(-index > size) or guard_size_oblivious(index >= size)
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 409, in guard_size_oblivious
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] return expr.node.guard_size_oblivious("", 0)
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/sym_node.py", line 564, in guard_size_oblivious
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] r = self.shape_env.evaluate_expr(
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] return retlog(fn(*args, **kwargs))
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6305, in evaluate_expr
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] return self._evaluate_expr(
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6495, in _evaluate_expr
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] raise self._make_data_dependent_error(
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode: Could not guard on data-dependent expression u0 >= 4 (unhinted: u0 >= 4). (Size-like symbols: u0)
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0]
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] Caused by: (_meta_registrations.py:4832 in meta_select)
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] For more information, run with TORCH_LOGS="dynamic"
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0]
E1213 16:58:36.372000 1650887 torch/_subclasses/fake_tensor.py:2389] [0/0] For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
Traceback (most recent call last):
File "/data/users/yidi/pytorch/test.py", line 8, in <module>
out = torch.compile(f, fullgraph=True)(torch.tensor(3, dtype=torch.int64), torch.randn(4, 4))
File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/data/users/yidi/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 2866, in run
super().run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 3046, in RETURN_VALUE
self._return(inst)
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 3031, in _return
self.output.compile_subgraph(
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/users/yidi/pytorch/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/data/users/yidi/pytorch/torch/_inductor/compile_fx.py", line 1859, in compile_fx
return aot_autograd(
File "/data/users/yidi/pytorch/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1145, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 754, in load
compiled_fn = dispatch_and_compile()
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 681, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
flat_f_outs = f(*flat_f_args)
File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
out = PropagateUnbackedSymInts(mod).run(
File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6781, in run_node
result = super().run_node(n)
File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 310, in call_function
return target(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/functional_tensor.py", line 527, in __torch_dispatch__
outs_unwrapped = func._op_dk(
File "/data/users/yidi/pytorch/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1817, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1378, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 2385, in _dispatch_impl
r = func(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_ops.py", line 722, in __call__
return self._op(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_meta_registrations.py", line 4832, in meta_select
guard_size_oblivious(-index > size) or guard_size_oblivious(index >= size)
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 409, in guard_size_oblivious
return expr.node.guard_size_oblivious("", 0)
File "/data/users/yidi/pytorch/torch/fx/experimental/sym_node.py", line 564, in guard_size_oblivious
r = self.shape_env.evaluate_expr(
File "/data/users/yidi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6305, in evaluate_expr
return self._evaluate_expr(
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6495, in _evaluate_expr
raise self._make_data_dependent_error(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
GuardOnDataDependentSymNode: Could not guard on data-dependent expression u0 >= 4 (unhinted: u0 >= 4). (Size-like symbols: u0)
Caused by: (_meta_registrations.py:4832 in meta_select)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
While executing %select : [num_users=1] = call_function[target=torch.select](args = (%l_t_, 0, %item), kwargs = {})
Original traceback:
File "/data/users/yidi/pytorch/test.py", line 6, in f
return torch.select(t, 0, c) + 1
```
### Versions
On main.
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,739,411,458
|
try root fix for FP8 tensor
|
mayank31398
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)"
] | 7
|
CONTRIBUTOR
|
Fixes #143194
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,739,408,997
|
[ca] re-enable disabled tests
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143247
FIXES https://github.com/pytorch/pytorch/issues/133197
The unspecified floats PR landed while this test was disabled, and it added an analysis restart which counts towards the backend call counter the test is using
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,739,403,209
|
UNSTABLE slow / linux-focal-rocm6.2-py3.10 / test (slow)
|
huydhn
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 4
|
CONTRIBUTOR
|
A network issue on the ROCm runners is causing all the downloads there to fail
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,739,401,309
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
2,739,400,520
|
Exclude py 3.13t triton package from PyTorch 3.13t wheel
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Follow up after https://github.com/pytorch/pytorch/pull/143162
Include triton only for 3.13 packages not 3.13t
| true
|
2,739,400,420
|
[CD] Test torch.compile on 3.13
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Follow up after https://github.com/pytorch/pytorch/pull/143162
| true
|
2,739,397,070
|
ROCm SDPA: Ensure attn_mask has the same dtype with q
|
xinyazhang
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 17
|
COLLABORATOR
|
This is required by the current AOTriton backend.
Fixes NaNs when calling the SDPA memory-efficient (ME) backend with `q.dtype() != attn_mask.dtype()`, as seen when training llama2 using transformers+deepspeed+pytorch.
Corresponding CUDA check seems to be here:
https://github.com/pytorch/pytorch/blob/708ce3c0082d670d9eaff84bc3c43cad4554a75d/aten/src/ATen/native/transformers/cuda/attention.cu#L1331-L1336
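For illustration, a minimal user-side sketch of the requirement the fix enforces internally — the shapes and the fp32 mask below are made up, only the dtype handling matters:
```python
import torch
import torch.nn.functional as F

# Hypothetical shapes; the point is only the dtype handling.
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
# An fp32 additive mask, as transformers+deepspeed pipelines may produce.
attn_mask = torch.zeros(2, 8, 128, 128, device="cuda", dtype=torch.float32)

# Cast the mask to q's dtype before SDPA, which is what the backend-side
# check enforces for the ME backend.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask.to(q.dtype))
```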
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,739,381,740
|
[DSD][BE] Rewrite some tests to remove `with_comms`
|
fegin
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143241
* #143240
Summary:
This saves ~1 minute of test time.
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,739,381,694
|
[BE][CP] Use run_subtests instead of parametrize
|
fegin
|
closed
|
[
"oncall: distributed",
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143241
* __->__ #143240
Summary:
This provides a 15X speedup in test execution.
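Schematically, the win comes from paying the distributed setup cost once and looping over configurations inside a single test, instead of once per parametrization; a rough sketch of the pattern (not the exact `run_subtests` signature):
```python
import itertools
import unittest

class ExampleCPTest(unittest.TestCase):
    # With @parametrize, each (flag_a, flag_b) combination becomes a separate
    # test, each paying the full distributed setup/teardown cost.
    # run_subtests-style iteration pays that cost once and loops inside:
    def test_all_configs(self):
        configs = {"flag_a": [True, False], "flag_b": [True, False]}
        keys, values = zip(*configs.items())
        for combo in itertools.product(*values):
            kwargs = dict(zip(keys, combo))
            with self.subTest(**kwargs):
                self._check_one_config(**kwargs)

    def _check_one_config(self, flag_a, flag_b):
        # Placeholder for the actual CP correctness check.
        self.assertIsNotNone((flag_a, flag_b))
```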
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,739,381,175
|
xpu: torch.nn.DataParallel fails on multi-XPU environment with "module 'torch._C' has no attribute '_scatter'"
|
dvrogozh
|
open
|
[
"oncall: distributed",
"triaged",
"module: xpu"
] | 8
|
CONTRIBUTOR
|
With:
* Nightly PyTorch XPU:
* torch `2.6.0.dev20241209+xpu`
* torchaudio `2.5.0.dev20241209+xpu`
* torchvision `0.20.0.dev20241209+xpu`
* https://github.com/huggingface/transformers/commit/add53e25ffa3d1750a944086d2fbb016aee35406
`torch.nn.DataParallel` fails in a multi-XPU environment with `AttributeError: module 'torch._C' has no attribute '_scatter'`. This can be reproduced with Hugging Face Transformers tests. One of them:
* https://github.com/huggingface/transformers/blob/add53e25ffa3d1750a944086d2fbb016aee35406/tests/models/vits/test_modeling_vits.py#L195
```
$ cat spec.py
import torch
DEVICE_NAME = 'xpu'
MANUAL_SEED_FN = torch.xpu.manual_seed
EMPTY_CACHE_FN = torch.xpu.empty_cache
DEVICE_COUNT_FN = torch.xpu.device_count
$ TRANSFORMERS_TEST_DEVICE_SPEC=spec.py python3 -m pytest tests/models/ -k test_multi_gpu_data_parallel_forward
<...>
tensor = _handle_complex(tensor)
if out is None:
devices = [_get_device_index(d) for d in devices]
> return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
E AttributeError: module 'torch._C' has no attribute '_scatter'
../../miniforge3/envs/xpu-nightly/lib/python3.12/site-packages/torch/nn/parallel/comm.py:205: AttributeError
------------------------------------------------------ Captured stdout call -------------------------------------------------------
pixel_values torch.Size([64, 3, 30, 30])
===================================================== short test summary info =====================================================
FAILED tests/models/falcon_mamba/test_modeling_falcon_mamba.py::FalconMambaModelTest::test_multi_gpu_data_parallel_forward - AttributeError: module 'torch._C' has no attribute '_scatter'
FAILED tests/models/mamba/test_modeling_mamba.py::MambaModelTest::test_multi_gpu_data_parallel_forward - AttributeError: module 'torch._C' has no attribute '_scatter'
FAILED tests/models/splinter/test_modeling_splinter.py::SplinterModelTest::test_multi_gpu_data_parallel_forward - AttributeError: module 'torch._C' has no attribute '_scatter'
FAILED tests/models/vits/test_modeling_vits.py::VitsModelTest::test_multi_gpu_data_parallel_forward - AttributeError: module 'torch._C' has no attribute '_scatter'
FAILED tests/models/x_clip/test_modeling_x_clip.py::
Here we also overwrite some of the tests of test_modeling_common.py, as X-CLIP does not use input_ids, inputs_embeds,
attention_mask and seq_length.
::test_multi_gpu_data_parallel_forward - AttributeError: module 'torch._C' has no attribute '_scatter'
======================================== 5 failed, 346 skipped, 77245 deselected in 10.52s ========================================
```
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,739,377,222
|
[torch][cuda] fix race condition in cuda initialization
|
suo
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 4
|
MEMBER
|
The access to lazy init callbacks (`_lazy_seed_tracker` and `_queued_calls`) is not synchronized with the initialization lock.
This exposes us to the following race:
1. start `_lazy_init`
2. take `_initialization_lock`
3. flush `_queued_calls` and run them all
4. another thread comes in and uses `_lazy_call` to put something on the queue (in our case, the `manual_seed`)
5. original thread finishes initializing, but never runs that call
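A minimal sketch of the fix's idea — hold the same lock when enqueueing lazy calls as when flushing them, so a call can never be queued after the flush but before initialization finishes. Names below are illustrative, not the actual torch.cuda internals:
```python
import threading

_initialization_lock = threading.Lock()
_initialized = False
_queued_calls = []

def _lazy_call(fn):
    # Take the same lock used by _lazy_init so a call can never be queued
    # after the queue has been flushed but before initialization finishes.
    with _initialization_lock:
        if _initialized:
            fn()                     # already initialized: run immediately
        else:
            _queued_calls.append(fn) # will be run by the initializing thread

def _lazy_init():
    global _initialized
    with _initialization_lock:
        if _initialized:
            return
        # ... actual device initialization would happen here ...
        for fn in _queued_calls:
            fn()
        _queued_calls.clear()
        _initialized = True
```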
| true
|
2,739,376,533
|
[EZ] Remove `--pre` from numpy installation command
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
| null | true
|
2,739,376,152
|
[AOTI] Relax input alignment assertion
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143236
Summary: https://github.com/pytorch/pytorch/pull/142136 added a runtime alignment assertion, but that assumption is probably too strict for more flexible uses of AOTI, e.g. Python deployment; see a recent error torchchat ran into for more details: https://github.com/pytorch/torchchat/actions/runs/12322072267/job/34394851280. This PR relaxes the runtime check and implements copy_misaligned_inputs in C++ instead.
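For illustration, a Python-level sketch of what relaxing the check means — the 16-byte alignment constant and the function shape are assumptions; the actual logic is emitted as C++ by the wrapper codegen:
```python
import torch

ALIGNMENT = 16  # bytes; illustrative value

def copy_misaligned_inputs(args):
    # Instead of asserting that every input is aligned, clone the ones that
    # are not, so the compiled kernel still sees aligned storage.
    return [
        a.clone() if isinstance(a, torch.Tensor) and a.data_ptr() % ALIGNMENT != 0 else a
        for a in args
    ]
```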
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov
Differential Revision: [D67287922](https://our.internmc.facebook.com/intern/diff/D67287922)
| true
|
2,739,374,550
|
[Utilization Log] Concurrently collect aggregate data during the output interval
|
yangw-dev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
# overview
Add a worker to collect metrics at short intervals:
1. Worker: add a worker to collect usage metrics, by default every 500 ms (this interval is configurable).
2. Calculate avg and max as data points, by default every 5 seconds.
# Other
Clean up the log format to keep only what is needed; currently we do not need to track GPU processors etc., or all PIDs from psutil.
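For illustration, a rough Python sketch of the sampling/aggregation scheme described above — the intervals and emitted field names are assumptions, not the exact implementation:
```python
import threading
import time
import psutil

def utilization_worker(stop_event, collect_interval=0.5, output_interval=5.0):
    cpu_samples, mem_samples = [], []
    last_output = time.monotonic()
    while not stop_event.is_set():
        # Collect a sample every collect_interval (500 ms by default).
        cpu_samples.append(psutil.cpu_percent(interval=None))
        mem_samples.append(psutil.virtual_memory().percent)
        now = time.monotonic()
        if now - last_output >= output_interval and cpu_samples:
            # Emit one aggregated data point per output interval (5 s by default).
            print({
                "cpu_avg": sum(cpu_samples) / len(cpu_samples),
                "cpu_max": max(cpu_samples),
                "mem_avg": sum(mem_samples) / len(mem_samples),
                "mem_max": max(mem_samples),
            })
            cpu_samples, mem_samples = [], []
            last_output = now
        stop_event.wait(collect_interval)

stop = threading.Event()
threading.Thread(target=utilization_worker, args=(stop,), daemon=True).start()
```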
| true
|
2,739,370,026
|
[CI/CD] Build torch with numpy 2 and compatible scipy & numba versions
|
haifeng-jin
|
closed
|
[
"open source",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
This is a follow-up for https://github.com/pytorch/pytorch/pull/141925.
The installed version of SciPy and Numba were not compatible with numpy 2.0.2 while building.
This PR specifies compatible versions of SciPy and Numba to install.
| true
|
2,739,351,039
|
Network outage on ROCm runners
|
huydhn
|
closed
|
[
"high priority",
"triage review",
"module: rocm",
"ci: sev"
] | 2
|
CONTRIBUTOR
|
## Current Status
Ongoing
## Mitigation
ROCm jobs have been marked as unstable for the time being:
* https://github.com/pytorch/pytorch/issues/143232
* https://github.com/pytorch/pytorch/issues/143231
* https://github.com/pytorch/pytorch/issues/143230
* https://github.com/pytorch/pytorch/issues/143246
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,739,349,517
|
UNSTABLE inductor-rocm / rocm6.2-py3.10-inductor / test
|
huydhn
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 4
|
CONTRIBUTOR
|
A network issue on the ROCm runners is causing all the downloads there to fail
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,739,349,135
|
UNSTABLE rocm / linux-focal-rocm6.2-py3.10 / test
|
huydhn
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 4
|
CONTRIBUTOR
|
A network issue on the ROCm runners is causing all the downloads there to fail
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,739,348,658
|
UNSTABLE trunk / linux-focal-rocm6.2-py3.10 / test
|
huydhn
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 4
|
CONTRIBUTOR
|
A network issue on the ROCm runners is causing all the downloads there to fail
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,739,345,586
|
Add typechecking indirection for Config
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143152
* __->__ #143229
When we create a Config[T], we actually dynamically unbox it in the module, so let's have the type checker believe that Config[T] produces a T. This enables proper typechecking support.
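A minimal sketch of this kind of typing indirection, with illustrative names rather than the actual implementation:
```python
from typing import Generic, TypeVar, TYPE_CHECKING

T = TypeVar("T")

if TYPE_CHECKING:
    # To the type checker, constructing a Config just yields a T,
    # matching what the module-level unboxing does at runtime.
    def Config(default: T) -> T: ...
else:
    class Config(Generic[T]):
        def __init__(self, default):
            self.default = default

# A config module can then declare:
max_autotune: bool = Config(False)  # the type checker sees a plain bool
```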
| true
|
2,739,336,881
|
Remove deprecated branch after capture_pre_autograd_graph fully migrate to training IR
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"ciflow/inductor",
"release notes: export"
] | 8
|
CONTRIBUTOR
|
Summary:
as title
#buildall
Test Plan: CI
Differential Revision: D67222286
| true
|
2,739,333,888
|
[export] Unify single and multiple return for hops
|
yiming0416
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 35
|
CONTRIBUTOR
|
Summary: Introduce an `is_hop_single_tensor_return` field on the `Node` class in serialization, so that during deserialization, when there is a single return, we know whether it is a tuple containing a single element or a bare single element.
Test Plan:
```
buck2 run @mode/dev-nosan sigmoid/inference/test:e2e_test_cpu -- -r E2ETestCPUCond
buck2 run @mode/dev-nosan sigmoid/inference/test:test_passes -- -r test_const_folding2
```
Differential Revision: D66991624
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,739,333,556
|
Expose remaining sharedMem cudaDeviceProps to python
|
peterbell10
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: new features"
] | 3
|
COLLABORATOR
|
I was a bit too fast with my earlier PR: `sharedMemPerMultiprocessor` includes some memory that is reserved for the system. The amount a kernel can actually use is limited by `sharedMemPerBlockOptin`.
I also expose `sharedMemPerBlock` for completeness.
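For example, the properties can be read from Python roughly like this — the snake_case attribute names are my assumption of how the fields surface:
```python
import torch

props = torch.cuda.get_device_properties(0)
# Total shared memory per SM, the per-block default limit, and the opt-in
# per-block maximum a kernel can actually request.
for name in ("shared_memory_per_multiprocessor",
             "shared_memory_per_block",
             "shared_memory_per_block_optin"):
    print(name, getattr(props, name, "not exposed in this build"))
```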
| true
|
2,739,330,194
|
No actual change, just remove variable contain Tensors from global scope
|
albanD
|
closed
|
[
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"skip-pr-sanity-checks",
"release notes: AO frontend"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143204
* #143323
* __->__ #143225
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,739,301,510
|
Kill capture_pre_autograd_graph API
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: quantization",
"ciflow/inductor",
"release notes: export",
"ci-no-td"
] | 160
|
CONTRIBUTOR
|
Summary:
Delete the following API:
- capture_pre_autograd_graph()
- capture_pre_autograd_graph_using_training_ir()
- gm_using_training_ir()
There are no more call sites of `capture_pre_autograd_graph`, except:
1) two test cases in coreml, PR to remove: https://github.com/apple/coremltools/pull/2400
2) XLA: one test case in pytorch/xla, PR to remove: https://github.com/pytorch/xla/pull/8398
3) a few call sites guarded by version guard (< 2.5.0)
Test Plan: CI
Reviewed By: tugsbayasgalan
Differential Revision: D64056353
| true
|
2,739,269,358
|
cpp_wrapper: Use runtime dispatched fallbacks for complex ops
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144124
* #144123
* #144002
* #143909
* #143421
* __->__ #143223
* #141371
When calling a fallback op in cpp_wrapper mode, where any of the inputs are complex numbers, utilize the runtime dispatched fallback mode. This properly handles the Conjugate and Negative dispatch keys, if present, in exchange for a performance pessimization in complex arithmetic.
This PR additionally fixes some cascading failure modes exposed in our `aot_inductor` tests by this change.
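For background, the Conjugate/Negative keys mark lazy views that the runtime dispatcher knows how to materialize, which is what the cpp_wrapper fallback now relies on; a small illustration:
```python
import torch

x = torch.randn(4, dtype=torch.complex64)
xc = x.conj()                              # lazy view: just sets the conjugate bit
print(xc.is_conj())                        # True
print(torch.resolve_conj(xc).is_conj())    # False: materialized copy

# A fallback routed through the runtime dispatcher sees the Conjugate key and
# resolves it before running the op, at the cost of some extra work.
y = torch.add(xc, 1)
```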
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,739,253,649
|
torch.onnx.export fails with <class 'torch._dynamo.exc.UserError'>: Could not guard on data-dependent expression u1 < 0 (unhinted: u1 < 0). (Size-like symbols: none)
|
liqunfu
|
open
|
[
"module: onnx",
"triaged",
"actionable"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
```python
import torch
from detectron2.structures import ImageList
batched_inputs = [{"image": torch.randint(0, 256, (3, 1024, 1024), dtype=torch.uint8), "height": 1024, "width": 1024}]
class test_model(torch.nn.Module):
def __init__(self):
super(test_model, self).__init__()
def forward(self, batched_inputs):
images = [x["image"] for x in batched_inputs]
images = ImageList.from_tensors(images, 32)
return images
test_model_ = test_model()
torch.onnx.export(test_model_, (batched_inputs,), 'test_model.onnx', dynamo=True, verbose=True)
```
Failure:
torch.onnx._internal.exporter._errors.TorchExportError: Failed to export the model with torch.export. This is step 1/2 of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and summit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
## Exception summary
<class 'torch._dynamo.exc.UserError'>: Could not guard on data-dependent expression u1 < 0 (unhinted: u1 < 0). (Size-like symbols: none)
Potential framework code culprit (scroll up for full backtrace):
File "/home/liqfu/.conda/envs/BiomedParse/lib/python3.11/site-packages/torch/_refs/__init__.py", line 2882, in constant_pad_nd
if pad[pad_idx + 1] < 0:
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u1"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "/home/liqfu/LiqunWA/BiomedParse/inference_utils/inference.py", line 10, in forward
images = ImageList.from_tensors(images, 32)
File "/home/liqfu/LiqunWA/detectron2/detectron2/structures/image_list.py", line 115, in from_tensors
batched_imgs = F.pad(tensors[0], padding_size, value=pad_value).unsqueeze_(0)
File "/home/liqfu/.conda/envs/BiomedParse/lib/python3.11/site-packages/torch/nn/functional.py", line 5096, in pad
return torch._C._nn.pad(input, pad, mode, value)
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example
from user code:
File "/home/liqfu/LiqunWA/BiomedParse/inference_utils/inference.py", line 10, in forward
images = ImageList.from_tensors(images, 32)
File "/home/liqfu/LiqunWA/detectron2/detectron2/structures/image_list.py", line 115, in from_tensors
batched_imgs = F.pad(tensors[0], padding_size, value=pad_value).unsqueeze_(0)
File "/home/liqfu/.conda/envs/BiomedParse/lib/python3.11/site-packages/torch/nn/functional.py", line 5096, in pad
return torch._C._nn.pad(input, pad, mode, value)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: CBL-Mariner 2.0.20241208 (x86_64)
GCC version: (GCC) 11.2.0
Clang version: Could not collect
CMake version: version 3.31.0
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.167.1-2.cm2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480C
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 avx512vbmi umip waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm serialize avx512_fp16 arch_capabilities
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20241208
[pip3] open_clip_torch==2.26.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.11 py311h5eee18b_0
[conda] mkl_random 1.2.8 py311ha02d727_0
[conda] numpy 1.26.4 py311h08b1b3b_0
[conda] numpy-base 1.26.4 py311hf175353_0
[conda] open-clip-torch 2.26.1 pypi_0 pypi
[conda] pytorch 2.5.1 py3.11_cuda11.8_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py311_cu118 pytorch
[conda] torchtriton 3.1.0 py311 pytorch
[conda] torchvision 0.20.1 py311_cu118 pytorch
| true
|
2,739,242,308
|
This should fail
|
malfet
|
closed
|
[
"module: cpu",
"ciflow/linux-aarch64"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,739,234,362
|
[logging] Log cudagraphify timings to dynamo_timed
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143220
Summary: this adds some new dynamo_timed calls in cudagraph_trees, primarily with the aim to add cudagraph-related timing to scuba. Things to note:
* Uses the changes in https://github.com/pytorch/pytorch/pull/141919 to log "runtime" entries
* The logging for chromium/tlparse/scuba relies on us providing a compile_id since it's not available in the environment. A lot of the changes here are just passing around the compile_id
* I believe the spirit of the scuba logging is to capture the overheads of `torch.compile`. Therefore, I'm not adding _every_ dynamo_timed to scuba. For example, "run_eager" is the first real execution of the inductor graph -- it's not cudagraph overhead, per se. Watch out for the two instances of `dynamo_compile_runtime_column_us="runtime_cudagraphify_time_us"`. Those are the spots I believe are _extra_ overhead we'd contribute to torch.compile.
Test Plan:
`python benchmarks/dynamo/torchbench.py --performance --training --amp --backend inductor --device cuda --print-compilation-time --repeat 5 --cold-start-latency --only dcgan`:
* tlparse: https://fburl.com/21yrdn8h
* scuba: https://fburl.com/scuba/dynamo_compile/sandbox/wt90wnjz
`python benchmarks/dynamo/torchbench.py --performance --training --amp --backend inductor --device cuda --print-compilation-time --repeat 5 --cold-start-latency --only nanogpt`
* tlparse: https://fburl.com/r9mp7uiv
* scuba: https://fburl.com/scuba/dynamo_compile/sandbox/1nvx94re
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,739,229,442
|
ROCM 6.2.4 RuntimeError: HIP error: AMD_SERIALIZE_KERNEL=3 Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
|
KEDI103
|
closed
|
[
"module: rocm",
"triaged"
] | 8
|
NONE
|
### 🐛 Describe the bug
Before the 6.2.4 release was published on the main page, I tried torch 2.6.0.dev20241209+rocm6.2.4 and it worked perfectly, but after torch 2.6.0.dev20241213+rocm6.2.4 was released this error appeared:
```
HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
```
@jeffdaily @hongxiayang
I am using a Radeon VII (gfx906) on a platform without PCIe atomics support.
Installed ROCm version: 6.3.0 (latest)

You can check before:
https://github.com/pytorch/pytorch/issues/103973
### Versions
torch: 2.6.0.dev20241213+rocm6.2.4
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,739,213,441
|
Exclude py 3.13t triton package from PyTorch 3.13t wheel
|
atalman
|
closed
|
[
"Merged",
"Reverted",
"ciflow/binaries",
"topic: not user facing",
"ci-no-td"
] | 11
|
CONTRIBUTOR
|
Follow up after https://github.com/pytorch/pytorch/pull/143162
Include triton only for 3.13 packages not 3.13t
| true
|
2,739,181,315
|
support slicing with symints in non-strict
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143217
Differential Revision: [D67215745](https://our.internmc.facebook.com/intern/diff/D67215745/)
| true
|
2,739,171,144
|
`torch._refs.tensor` does not accept `[]`
|
avikchaudhuri
|
closed
|
[
"triaged",
"actionable",
"module: primTorch",
"module: decompositions"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
torch._refs.tensor([]) # error
torch.tensor([]) # OK
```
### Versions
trunk
cc @ezyang @mruberry @SherlockNoMad
| true
|
2,739,143,375
|
Runners, torchbench, & the future
|
janeyx99
|
open
|
[
"module: ci",
"triaged"
] | 13
|
CONTRIBUTOR
|
The purpose of this issue is to centralize discussions regarding the state of our runners and torchbench, in particular what should be expected when they go through transitions. It is a bit of a weird issue as this does not point to any codebase problems with pytorch/pytorch, but the intended discussion group spans beyond pytorch/benchmark so I'm putting the issue here.
### Context
I recently wanted to send someone my optimizers unidash to discover that it was empty, oh no! I traced the problem back to the GHA in pytorch/benchmark that runs the benchmark (and writes the stats) to discover that it's been red again, for weeks, because the a100-runner it ran on had been removed: https://github.com/pytorch/benchmark/actions/workflows/userbenchmark-regression-detector.yml
I fixed it with @xuzhao9's help https://github.com/pytorch/benchmark/pull/2557 but two things picked at me:
(1) Not just my workflow, but all the userbenchmark workflows + anything else using A100 were not migrated, and were just failing: https://github.com/pytorch/benchmark/actions/workflows/userbenchmark-a100.yml and https://github.com/pytorch/benchmark/actions/workflows/v3-nightly.yml. Was I the first to notice because I'm the only one to care? Are there others who care but haven't been notified? We should migrate the workflows that do matter and delete the code for the ones that don't.
(2) It would have been nice to have a heads up. I believe there was a migration plan (cc @seemethere @malfet @pytorch/pytorch-dev-infra @jeanschmidt @atalman @xuzhao9) as I saw some budding draft PRs, though I would have expected for all usages of the runner to be migrated before the runner was removed completely. Is this incident a fluke or should I shift expectations?
### Questions to answer
(1) Across the repos in the org (not just pytorch/pytorch), what is the expected transition plan for when a runner will no longer be running anymore? cc @seemethere
(2) (or 1b) Does it matter whether it's a repo-level runner vs org-level runner?
(3) Do people care about the userbenchmarks in torchbench/are there owners (other than me)? If not, we should do a big code delete! cc @drisspg for torchao, @pytorch-dev-infra for nightly-V3.
(4) Is torchbench userbenchmarks still the recommended path for writing recurring benchmarks? Where are folks moving to? cc @xuzhao9
### What I ultimately care about
That my optim benchmarks will have a home to keep running, that they won't surprisingly break on me if we could help it!
| true
|
2,739,140,405
|
Add tests for non divisible inputs for flex decoding
|
joydddd
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 6
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,739,120,886
|
Get rid of _lazy_import hack
|
ezyang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143213
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,739,077,043
|
[CI] Add Triton 3.13t build
|
malfet
|
closed
|
[
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
By just extending the matrix and invoking script with appropriate cpython runtime
| true
|
2,739,023,586
|
[dynamo] disable eval frame callback around most of _TorchDynamoContext wrapper function
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143211
Internal xref: https://fb.workplace.com/groups/1075192433118967/permalink/1559636954674510/
If the `_fn` returned by `_TorchDynamoContext.__call__` makes an external function call, dynamo is recursively invoked. This can cause issues if there are added calls that are not skipped by Dynamo. So we should disable the eval frame callback as much as possible.
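Schematically, the wrapper keeps the callback off for its own bookkeeping and only enables it around the user code; the callback setter below is a placeholder, not the real eval_frame API:
```python
# Placeholder hook standing in for the real eval-frame callback setter.
_callback = None
def _set_eval_frame(cb):
    global _callback
    prev, _callback = _callback, cb
    return prev

def make_wrapper(fn, compiled_callback):
    def _fn(*args, **kwargs):
        # Keep the callback off while doing bookkeeping inside the wrapper so
        # helper functions called here are not recursively compiled.
        prev = _set_eval_frame(None)
        try:
            # ... wrapper-internal setup would go here ...
            _set_eval_frame(compiled_callback)  # enable only around user code
            try:
                return fn(*args, **kwargs)
            finally:
                _set_eval_frame(None)
        finally:
            _set_eval_frame(prev)
    return _fn
```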
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D67211749](https://our.internmc.facebook.com/intern/diff/D67211749)
| true
|
2,738,969,965
|
[2/N][Memory Profiling] Record memory allocation/free
|
mzzchy
|
closed
|
[
"Stale"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Design Doc: https://fburl.com/gdoc/47zpuweb
Prototyping: D66469341
In this diff, we implement the logic to record, store, and export the memory trace, which will be invoked by MTIA hooks later.
* Add RingBuffer<MTIATraceEntry> to mtia_allocator to store trace
* Implement record_trace() to add trace entry for allocation and free
* Add record_history_ as the profiler enablement flag and record_history() to toggle the state
To avoid the duplicate symbol error, we remove python/combined_traceback from the srcs of C_impl_cuda and add libtorch_memory_profiler as a dependency.
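The shape of that recording logic, sketched in Python for clarity — the real implementation is C++ inside the MTIA allocator, and the field names here are illustrative:
```python
from collections import deque
import time

class TraceRecorder:
    def __init__(self, max_entries=10000):
        self._buffer = deque(maxlen=max_entries)  # ring buffer of trace entries
        self._record_history = False              # enablement flag

    def record_history(self, enabled):
        self._record_history = enabled

    def record_trace(self, action, ptr, size):
        # Called on every allocation and free when recording is enabled.
        if self._record_history:
            self._buffer.append({"action": action, "ptr": ptr,
                                 "size": size, "ts": time.time()})

    def export(self):
        return list(self._buffer)
```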
Differential Revision: [D66776251](https://our.internmc.facebook.com/intern/diff/D66776251/)
| true
|