Each row is a GitHub issue or pull request with the following columns: id (int64, 2.74B–3.05B), title (string, 1–255 chars), user (string, 2–26 chars), state (string, 2 classes), labels (list, 0–24 items), comments (int64, 0–206), author_association (string, 4 classes), body (string, 7–62.5k chars, nullable ⌀), is_title (bool, 1 class).

| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,839,195,742
|
support meta_tensor.to(device='cpu') under fake_mode
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: composability",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Fixing this is actually a bit annoying:
(1) FakeTensorMode sees a function where all of its inputs are real tensors, so it tries to run the real compute before converting the output to a FakeTensor
(2) we don't actually want this, because the "real compute" is supposed to error normally when you do `meta_tensor.to(device='cpu')`. Instead, we want FakeTensor to skip constant prop and run the normal FakeTensor implementation, which will not error
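A minimal sketch of the scenario (an assumed illustration, not the PR's test):
```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

meta_t = torch.empty(4, device="meta")   # a real tensor input (on the meta device)

# In eager mode, meta_t.to(device='cpu') is expected to error ("no data").
# Under fake mode, constant prop would attempt that real compute and hit the
# same error; the desired behavior is to fall back to the FakeTensor
# implementation, which should succeed.
with FakeTensorMode(allow_non_fake_inputs=True):
    out = meta_t.to(device="cpu")
```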
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #133044
* #146731
* __->__ #146729
* #146642
| true
|
2,839,188,491
|
[StaticRuntime] Fix a bug that memory planner ignores subblocks
|
coufon
|
closed
|
[
"oncall: jit",
"fb-exported",
"release notes: jit"
] | 9
|
CONTRIBUTOR
|
Summary: When a Static Runtime graph node has sub-blocks, the memory planner does not treat the sub-blocks' inputs as inputs of the node. As a result, the lifetime of such inputs is computed incorrectly and the corresponding tensor memory is released earlier than required, which causes errors.
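For illustration, a minimal sketch (assumed TorchScript example, not the actual failing model) of a node with sub-blocks whose input must outlive the outer node:
```python
import torch

@torch.jit.script
def f(x: torch.Tensor, flag: bool) -> torch.Tensor:
    y = x + 1              # y is only consumed inside the sub-blocks of the prim::If below
    if flag:
        return y * 2       # if the planner ignores sub-block inputs, y's buffer
    else:                  # can be freed before these uses execute
        return y - 2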
Differential Revision: D69195886
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,839,169,365
|
Rename PrimHOPBase to BaseHOP + minor changes
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: foreach_frontend",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146730
* __->__ #146727
This PR:
- renames PrimHOPBase to BaseHOP
- changes the backward pass to always return a tuple (to match the
forward pass).
Test Plan:
- tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,839,166,213
|
[ez][BE] get rid of the extra printf('\n')
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary: as title
Test Plan:
```
AOT_INDUCTOR_DEBUG_INTERMEDIATE_VALUE_PRINTER=3 TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 TORCHINDUCTOR_ABI_COMPATIBLE=1 TORCH_COMPILE_DEBUG=1 TORCH_LOGS="+graph, inductor, +schedule, output_code" buck2 run -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=h100a @//mode/opt fbcode//caffe2/test/inductor:test_aot_inductor -- -r test_addmm_cuda
```
Differential Revision: D69328701
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,839,124,193
|
remove incorrect warnings from min/max documentation
|
ngimel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,839,084,980
|
Add support for flexattention + int64 indexing
|
Chillee
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
Today FlexAttention fails if you pass in an input that is too big (since it'll need 64-bit indexing)
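A rough sketch (assumed shapes, purely for illustration) of when tensor offsets exceed the 32-bit range:
```python
# Element count of a single hypothetical q/k/v tensor of shape [B, H, S, D]
B, H, S, D = 8, 16, 131_072, 128
numel = B * H * S * D
print(numel)               # 2_147_483_648 == 2**31
print(numel > 2**31 - 1)   # True: offsets no longer fit in int32, so 64-bit indexing is required
```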
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @drisspg @yanboliang @BoyuanFeng
| true
|
2,839,077,535
|
torch: Log a unified waitcounter for torch.compile and triton.autotune
|
c00w
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Summary: Add a second more generic waitcounter to torch.compile. We'll keep expanding this as new generic pytorch compilation sites show up.
Test Plan: Waitcounter only change, relying on existing tests.
Differential Revision: D69215401
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,839,003,336
|
Test on in-graph constructed NJTs
|
jbschlosser
|
open
|
[
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146722
* #146721
A recent set of bugs has been cropping up related to NJTs that are constructed in-graph within a compiled function. This exercises different paths related to symbolic nested ints, etc. Some examples:
* #145874
* #146644
To get ahead of these, we should do NJT testing for this case as well.
This PR parametrizes the OpInfo tests for compile + forward to cover both the in-graph constructed NJT and normal input cases. TBD what fails.
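For reference, a minimal sketch (assumed example, not the parametrized OpInfo test) of the in-graph construction case:
```python
import torch

@torch.compile(fullgraph=True)
def f(values, offsets):
    # NJT constructed inside the compiled region -> exercises symbolic nested ints
    njt = torch.nested.nested_tensor_from_jagged(values, offsets)
    return njt.sin()

values = torch.randn(10, 4)
offsets = torch.tensor([0, 3, 10])
out = f(values, offsets)
```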
TODO:
* Do this for compile + backward tests also (?)
| true
|
2,839,003,246
|
Use inductor backend for NJT compile tests
|
jbschlosser
|
open
|
[
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146722
* __->__ #146721
We've been using `backend="aot_eager_decomp_partition"` for NJT compile testing, but this can let inductor bugs slip through. This PR switches the compile tests to use `backend="inductor"`; let's see if test runtime is an issue after this.
| true
|
2,838,998,538
|
[ca] remove private API: _compiled_autograd_should_lift
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: dynamo"
] | 12
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146735
* __->__ #146720
Since the functional autograd + compiled autograd migration, we don't trace into nodes anymore, and everything is lifted. We can't support this flag, which tries to inline nodes make_fx-style in CA's initial pass. There's no more usage internally.
| true
|
2,838,985,012
|
Operations that fail under `torch.export.export`(`torch.autograd.grad`) -> `torch.compile`
|
cw-tan
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 1
|
NONE
|
### 🐛 Describe the bug
This is not entirely a `torch.compile` problem; it's more of a `torch.export.export` problem when used with `torch.autograd.grad`. We have seen minimal working examples of `torch.export.export` working for code that includes `x.requires_grad_() ... torch.autograd.grad(y, [x])`, but some operations don't get exported correctly. Here is a minimal script that fails on PyTorch nightly 2.7.0 for a bunch of operations found so far (and a minimal one that succeeds).
```python
import torch


class Model(torch.nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_channels))

    def forward(self, x, y):
        x.requires_grad_()
        # this works
        # z = (x * y * self.weight).square().sum()
        # 1. `sqrt` fails
        z = (x * y * self.weight).square().sum().sqrt()
        # 2. `rsqrt` fails
        # z = (x * y * self.weight).square().sum().rsqrt()
        # 3. `exp` fails
        # z = (x * y * self.weight).square().sum().exp()
        # 4. `torch.linalg.norm` fails
        # z = torch.linalg.norm(x * self.weight, dim=-1).sum()
        # 5. using `matmul` with broadcasting fails
        # note that this doesn't use `self.weight` and would error out anyway if we reach `loss.backward()`, but is here to show the error during `export`
        # b1n, bn1 -> b11
        # z = torch.matmul(x.unsqueeze(1), y.unsqueeze(2)).square().sum()
        # 6. using `torch.nn.functional.linear` fails
        # z = torch.nn.functional.linear(x, torch.outer(self.weight, self.weight), None).sum()
        grad = torch.autograd.grad(z, [x])[0]
        return grad


device = "cuda"
num_batch = 512
num_channels = 256
x = torch.randn(num_batch, num_channels, dtype=torch.float32, device=device)
y = torch.randn(num_batch, num_channels, dtype=torch.float32, device=device)
model = Model(num_channels).to(device=device)
eager_out = model(x, y)
print(eager_out)
batch_dim = torch.export.Dim("batch", min=1, max=1024)
exported = torch.export.export(
    model,
    (
        x,
        y,
    ),
    strict=False,
    dynamic_shapes={"x": {0: batch_dim}, "y": {0: batch_dim}},
)
model = torch.compile(exported.module())
out = model(x, y)
loss = out.square().mean()
loss.backward()
print(model.weight.grad)
```
### Error logs
The error signature for all these cases looks like the following. I'm guessing the backward passes of these problematic operations are not very `export`-friendly?
```
/home/cwtan/micromamba/envs/torch-nightly/lib/python3.11/site-packages/torch/export/_unlift.py:81: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
getattr_node = gm.graph.get_attr(lifted_node)
/home/cwtan/micromamba/envs/torch-nightly/lib/python3.11/site-packages/torch/fx/graph.py:1790: UserWarning: Node lifted_tensor_0 target lifted_tensor_0 lifted_tensor_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/cwtan/micromamba/envs/torch-nightly/lib/python3.11/site-packages/torch/export/_unlift.py:330: UserWarning: A model attribute `lifted_tensor_0` requires gradient. but it's not properly registered as a parameter. torch.export will detach it and treat it as a constant tensor but please register it as parameter instead.
...
File "/home/cwtan/micromamba/envs/torch-nightly/lib/python3.11/site-packages/torch/_guards.py", line 1054, in detect_fake_mode
assert fake_mode is m, (
^^^^^^^^^^^^^^
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: fake mode (<torch._subclasses.fake_tensor.FakeTensorMode object at 0x7fd3f1342fd0>) from tracing context 0 doesn't match mode (<torch._subclasses.fake_tensor.FakeTensorMode object at 0x7fd3f3d29850>) from fake tensor input 3
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250207+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A1000 6GB Laptop GPU
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250207+cu124
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,838,935,121
|
[CUDAGraph] add skip message for unbacked symint
|
BoyuanFeng
|
open
|
[
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Add an explicit skip message for unbacked symint in cudagraph, as suggested by @bdhirsh.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,838,922,626
|
[BE][cuDNN] cuDNN to 9.7.1.26 for CUDA 12.8
|
eqy
|
open
|
[
"module: build",
"module: cudnn",
"triaged",
"open source",
"Stale",
"ciflow/binaries",
"topic: not user facing",
"topic: build"
] | 7
|
COLLABORATOR
|
cuDNN 9.7.1 is out now and is expected to be the longer-lived branch with more potential backports vs. 9.7.0
CC @nWEIdia @tinglvv
cc @malfet @seemethere @csarofeen @ptrblck @xwang233
| true
|
2,838,859,306
|
[BE] Remove outdated RPC benchmark
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: distributed (rpc)",
"skip-pr-sanity-checks"
] | 4
|
CONTRIBUTOR
|
We have lots of outdated unused + uncalled code in our codebase, namely in our benchmarks and examples folders among others. The last change to this directory was 4 years ago and this code looks dead. cc @albanD @H-Huang for feedback
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146716
| true
|
2,838,811,832
|
[export][ez] Allow math.trunc for serialization.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Summary: as title.
Test Plan: CI
Differential Revision: D69317084
| true
|
2,838,744,082
|
[hop] Support more output types for `flat_apply`
|
StrongerXi
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147572
* #147571
* #146950
* #146367
* __->__ #146714
This patch enables `flat_apply` to support certain non-Tensor output
types like containers and graphable types. This will in turn enable the
upcoming `mark_traceable` to support more output types.
The patch also exposes `func_to_graphable` rather than having users call the
lower-level `pytree.flatten(ConstantFunction(...))`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,838,743,928
|
[dynamo][fx] Support dataclass whose fields have `init=False`
|
StrongerXi
|
closed
|
[
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146950
* #146367
* #146714
* __->__ #146713
* #147152
* #147145
Previously, Dynamo and FX had code paths that reconstruct a dataclass
instance based on its type and fields; however, they weren't taking
`init=False` into account (which is supposed to exclude the field from the
constructor).
This patch fixes that, and also updates `pytree.LeafSpec` so that its
`__init__` conforms with the `init` attribute of its fields. Without
this change, the aforementioned reconstruction logic would fail.
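A minimal sketch (assumed example) of why the reconstruction has to respect `init=False`:
```python
from dataclasses import dataclass, field, fields

@dataclass
class Point:
    x: int
    y: int = field(init=False, default=0)   # excluded from the generated __init__

p = Point(1)
# Passing every field to the constructor fails:
#   Point(x=1, y=0)  ->  TypeError: __init__() got an unexpected keyword argument 'y'
# Reconstruction must only pass fields with init=True:
rebuilt = Point(**{f.name: getattr(p, f.name) for f in fields(p) if f.init})
```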
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,838,701,506
|
torch/_inductor/cpp_builder.py : _is_gcc Function Incorrectly Classifies clang++ as g++
|
ankushjqc
|
closed
|
[
"triaged",
"actionable",
"oncall: cpu inductor"
] | 4
|
NONE
|
Description: The `_is_gcc` function is intended to identify whether a given C++ compiler is GCC. However, it incorrectly classifies `clang++` as g++ due to the regular expression currently used in the function, which in turn leads to the wrong compiler options being used with clang (for example, `-fno-tree-loop-vectorize` is not recognized by clang version 14.0.0):
clang: error: unknown argument: '-fno-tree-loop-vectorize'
Steps to Reproduce:
1. Use the `_is_gcc` function with `clang++` as the input.
2. Observe that the function returns `True`, indicating that `clang++` is classified as g++.

Expected Behavior: The function should return `False` when `clang++` is provided as the input, as clang++ is not GCC.
Actual Behavior: The function returns `True` when `clang++` is provided as the input.
```
>>> import re
>>> from torch._inductor import cpp_builder
>>> cpp_builder.get_cpp_compiler()
'/usr/bin/clang++'
>>> cpp_builder._is_gcc(cpp_builder.get_cpp_compiler())
True
>>> bool(re.search(r"(gcc|g\+\+)", '/usr/bin/clang++'))
True
>>> cpp_builder._get_optimization_cflags(cpp_builder.get_cpp_compiler())
['O3', 'DNDEBUG', 'fno-trapping-math', 'funsafe-math-optimizations', 'ffinite-math-only', 'fno-signed-zeros', 'fno-math-errno', 'fexcess-precision=fast', 'fno-finite-math-only', 'fno-unsafe-math-optimizations', 'ffp-contract=off', 'fno-tree-loop-vectorize', 'march=native']
```
To ensure that the regex does not match "clang++" when looking for "g++", we can anchor the match with a word boundary (`\b`). A leading `\b` rules out "clang++", since the "g" there is preceded by a word character. A trailing `\b` cannot be placed after "g\+\+", because `+` is not a word character, so `\bg\+\+\b` would never match even a plain "g++"; only "gcc" needs the trailing boundary.
Here's the updated regex:
`r"\bgcc\b|\bg\+\+"`
```
def _is_gcc(cpp_compiler: str) -> bool:
    if sys.platform == "darwin" and _is_apple_clang(cpp_compiler):
        return False
    # A leading \b rules out "clang++" (its "g" is preceded by a word character).
    # No trailing \b after g\+\+: "+" is not a word character, so "g\+\+\b"
    # would never match and g++ itself would be misclassified.
    return bool(re.search(r"\bgcc\b|\bg\+\+", cpp_compiler))

# Example usage
print(_is_gcc('/usr/bin/clang++'))  # Output: False (correct)
print(_is_gcc('/usr/bin/g++'))      # Output: True (still detected)
```
| true
|
2,838,686,911
|
Add different padding modes to torch.nn.utils.rnn.pad_sequence
|
nikas-belogolov
|
open
|
[
"module: nn",
"module: rnn",
"triaged"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
Add different padding modes to `torch.nn.utils.rnn.pad_sequence`, like reflect, replicate, and circular.
I have sequences that I want to pad using replication, but I needed to create a custom function for it.
### Alternatives
```python
import torch
import torch.nn.functional as F

def pad_sequence_replicate(sequences):
    # Pads a list of (L_i, C) tensors to a (B, max_len, C) batch by replicating the last row.
    max_len = max(seq.size(0) for seq in sequences)
    padded_sequences = []
    for s in sequences:
        if s.size(0) == max_len:
            padded_sequences.append(s.unsqueeze(0))  # unsqueeze so every entry is 3D
            continue
        pad_amount = max_len - s.size(0)
        # "replicate" repeats the last row along the sequence dimension
        s = F.pad(s.unsqueeze(0), (0, 0, 0, pad_amount), "replicate")
        padded_sequences.append(s)
    return torch.cat(padded_sequences)
```
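A quick check of the helper above (assumed shapes):
```python
seqs = [torch.arange(6.0).reshape(3, 2), torch.arange(4.0).reshape(2, 2)]
batch = pad_sequence_replicate(seqs)
print(batch.shape)   # torch.Size([2, 3, 2]); the shorter sequence's last row is replicated
```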
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,838,633,337
|
[MTIA] (3/n) Implement PyTorch APIs to query/reset device peak memory usage
|
chaos5958
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Summary: Public summary (shared with Github): This diff implements a C++-Python binding to enable `reset_peak_memory_stats`.
Test Plan: The test is implemented in the following diff.
Reviewed By: yuhc
Differential Revision: D68988673
| true
|
2,838,603,359
|
FSDP: avoid resetting version counter of all_gather_output in inference_mode
|
bdhirsh
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
FSDP needs to hide VC bumps on its allgather buffer, but it does not need to do this if the allgather buffer was generated under inference mode.
more details here: https://www.internalfb.com/diff/D69115649?dst_version_fbid=1316814572779281&transaction_fbid=849120230625711
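A minimal sketch (assumed illustration) of why the reset is unnecessary for inference-mode buffers:
```python
import torch

with torch.inference_mode():
    buf = torch.empty(8)       # inference tensors do not track a version counter

print(buf.is_inference())      # True -> there is no VC bump to hide or reset
```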
Test Plan: CI
Differential Revision: D69311496
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,838,555,697
|
Periodic Activations Module
|
GulkoA
|
open
|
[
"module: nn",
"triaged"
] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
Periodic activation functions were proposed in 2020 by [V. Sitzmann et al](http://arxiv.org/abs/2006.09661) and have since been used in numerous Implicit Neural Representation publications ([pi-GAN](https://arxiv.org/abs/2012.00926), [Compressive Neural Representations](https://arxiv.org/abs/2104.04523), [MINER](https://arxiv.org/abs/2202.03532), [Neural Stream Functions](https://ieeexplore.ieee.org/document/10148500), [3D Keypoint Estimation](https://onlinelibrary.wiley.com/doi/10.1111/cgf.14917), [3DNS](https://arxiv.org/abs/2209.13971), [ECRN](https://arxiv.org/abs/2311.12831), and more).
I would like to propose adding a periodic activation function (sin/cos) as a high-level module to PyTorch
### Alternatives
Currently, all projects involving periodic activation functions define their own modules, using `torch.sin` in the forward method, but this creates a lot of inconsistency and repetition across neural compression codebases.
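For concreteness, a minimal sketch of the kind of module being proposed (an assumed API, not an existing `torch.nn` class):
```python
import torch
from torch import nn

class Sine(nn.Module):
    """Periodic activation: sin(w0 * x), with w0 as in SIREN (Sitzmann et al., 2020)."""

    def __init__(self, w0: float = 30.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * x)

model = nn.Sequential(nn.Linear(2, 64), Sine(), nn.Linear(64, 1))
```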
### Additional context
Adding a dedicated `nn.Module` to implement it would help standardize implementations used by researchers and encourage research.
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,838,543,202
|
AutoAC can cause deadlocks with tensor parallelism and data-dependent flop formulas
|
lw
|
open
|
[
"module: activation checkpointing",
"triaged",
"oncall: pt2",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Some operators' flop formulas depend on the content of their inputs, such as [FlashAttention when using a NestedTensor](https://github.com/pytorch/pytorch/blob/5d7532140f377195a831942f294c1f07c589bd9c/torch/utils/flop_counter.py#L298) (a.k.a. varying sequence lengths, document-causal, block-diagonal, ...). The partitioner's AutoAC mode (enabled with `torch._functorch.config.activation_memory_budget < 1`) makes use of these formulas to choose which operators to checkpoint vs. recompute in the backward.
This is already somewhat problematic (as it will "freeze" the decisions made with the first invocation's data for all subsequent invocations), but it becomes worse when the graph contains collectives, such as when using tensor/sequence parallelism. In such cases the partitioner might also introduce/suppress _collective_ ops from the backward graph, but if ranks have different data this could lead to inconsistent decisions and thus a desynchronization (a deadlock).
This is not just theoretical: it has occurred to us in practice, and here is a standalone repro:
```py
import tempfile
from datetime import timedelta
from functools import partial

import torch
import torch.distributed._functional_collectives as funcol
import torch.utils.flop_counter
from torch._inductor.utils import run_and_get_code

TOTAL_SEQLEN = 8192
NUM_HEADS = 8
HEAD_DIM = 128
EMB_DIM = NUM_HEADS * HEAD_DIM
HIDDEN_DIM = EMB_DIM * 2


def attn(xq, xk, xv, seqstarts, max_seqlen):
    return torch.ops.aten._flash_attention_forward(
        xq.unflatten(-1, (-1, HEAD_DIM)),
        xk.unflatten(-1, (-1, HEAD_DIM)),
        xv.unflatten(-1, (-1, HEAD_DIM)),
        seqstarts,
        seqstarts,
        max_seqlen,
        max_seqlen,
        0.0,
        True,
        return_debug_mask=False,
        scale=HEAD_DIM ** -0.5,
        window_size_left=-1,
        window_size_right=-1,
        seqused_k=None,
        alibi_slopes=None,
    )[0].flatten(1, 2)


def scale(t: torch.Tensor, amax_t: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    max_v = torch.finfo(torch.float8_e4m3fn).max
    scale_t = torch.clamp(amax_t.float(), min=1e-12) / max_v
    t_fp8 = (t / scale_t).to(torch.float8_e4m3fn)
    return t_fp8, scale_t


def fp8_matmul(
    first: torch.Tensor,
    amax_first: torch.Tensor,
    second_t: torch.Tensor,
    amax_second_t: torch.Tensor,
    parallel: str,
) -> torch.Tensor:
    tp_group = torch.distributed.group.WORLD
    first_fp8, scale_first = scale(first, amax_first)
    second_t_fp8, scale_second_t = scale(second_t, amax_second_t)
    if parallel == "col":
        first_fp8 = funcol.all_gather_tensor(first_fp8, gather_dim=0, group=tp_group)
        scale_first = funcol.all_gather_tensor(scale_first, gather_dim=0, group=tp_group)
    res = torch._scaled_mm(
        first_fp8,
        second_t_fp8.t(),
        scale_a=scale_first,
        scale_b=scale_second_t.t(),
        out_dtype=torch.bfloat16,
    )
    if parallel == "row":
        res = funcol.reduce_scatter_tensor(res, "sum", scatter_dim=0, group=tp_group)
    return res


REVERSE = {"col": "row", "row": "col"}


@torch.compiler.allow_in_graph
class Fp8LinearFn(torch.autograd.Function):
    @staticmethod
    def forward(
        ctx: torch.autograd.function.FunctionCtx,
        a: torch.Tensor,
        b_t: torch.Tensor,
        parallel: str,
    ) -> torch.Tensor:
        amax_a = a.abs().amax(dim=-1, keepdim=True)
        amax_b_t = b_t.abs().amax(dim=-1, keepdim=True)
        out = fp8_matmul(a, amax_a, b_t, amax_b_t, parallel)
        ctx.save_for_backward(a, b_t, amax_b_t)
        ctx.parallel = parallel
        return out

    @staticmethod
    def backward(
        ctx: torch.autograd.function.FunctionCtx, grad_out: torch.Tensor
    ) -> tuple[torch.Tensor, torch.Tensor, None]:
        a: torch.Tensor
        b_t: torch.Tensor
        amax_b_t: torch.Tensor
        a, b_t, amax_b_t = ctx.saved_tensors
        parallel = REVERSE[ctx.parallel]
        # Workaround for https://github.com/pytorch/pytorch/issues/141881.
        b_t = b_t + grad_out[0, :, None]
        b = b_t.t().contiguous()
        amax_grad_out = grad_out.abs().amax(dim=-1, keepdim=True)
        amax_b = amax_b_t.t().amax(dim=-1, keepdim=True)
        amax_b = amax_b.repeat_interleave(
            b.shape[0] // amax_b.shape[0], dim=0, output_size=b.shape[0]
        )
        grad_a = fp8_matmul(
            grad_out, amax_grad_out, b, amax_b, parallel
        )
        tp_group = torch.distributed.group.WORLD
        if parallel == "col":
            grad_out = funcol.all_gather_tensor(grad_out, gather_dim=0, group=tp_group)
        if parallel == "row":
            a = funcol.all_gather_tensor(a, gather_dim=0, group=tp_group)
        grad_b = grad_out.t() @ a
        return grad_a, grad_b, None


@torch.compile(fullgraph=True)
def layer(x, wq, wk, wv, wo, seqstarts, max_seqlen):
    # y = funcol.all_gather_tensor_autograd(x, gather_dim=0, group=torch.distributed.group.WORLD)
    y = Fp8LinearFn.apply(
        attn(
            Fp8LinearFn.apply(x, wq, "col"),
            Fp8LinearFn.apply(x, wk, "col"),
            Fp8LinearFn.apply(x, wv, "col"),
            seqstarts,
            max_seqlen
        ),
        wo,
        "row"
    )
    # y = funcol.reduce_scatter_tensor_autograd(y, "sum", scatter_dim=0, group=torch.distributed.group.WORLD)
    return x + y


def run(rank: int, world_size: int, rdv_dir: str) -> None:
    torch.manual_seed(0)
    torch.cuda.set_device(rank)
    torch.distributed.init_process_group(
        backend="nccl",
        rank=rank,
        world_size=world_size,
        init_method=f"file://{rdv_dir}/rdv",
        timeout=timedelta(seconds=10),
    )
    torch._functorch.config.activation_memory_budget = 0.8005
    torch._dynamo.reset_code_caches()
    if rank == 0:
        # Least flops
        seqlens = [TOTAL_SEQLEN // 2, TOTAL_SEQLEN // 2]
    else:
        # Most flops
        seqlens = [TOTAL_SEQLEN - 128, 128]
    assert sum(seqlens) == TOTAL_SEQLEN
    max_seqlen = max(seqlens)
    seqlens = torch.tensor(seqlens)
    seqstarts = (seqlens.cumsum(dim=0) - seqlens).to(torch.int32).cuda()
    wq = torch.randn((EMB_DIM // world_size, EMB_DIM), dtype=torch.bfloat16, device="cuda")
    wk = torch.randn((EMB_DIM // world_size, EMB_DIM), dtype=torch.bfloat16, device="cuda")
    wv = torch.randn((EMB_DIM // world_size, EMB_DIM), dtype=torch.bfloat16, device="cuda")
    wo = torch.randn((EMB_DIM, EMB_DIM // world_size), dtype=torch.bfloat16, device="cuda")
    in_ = torch.randn((TOTAL_SEQLEN // world_size, EMB_DIM), dtype=torch.bfloat16, device="cuda")
    grad_out = torch.randn((TOTAL_SEQLEN // world_size, EMB_DIM), dtype=torch.bfloat16, device="cuda")
    (
        (out, (grad_in, grad_wq, grad_wk, grad_wv, grad_wo)),
        (fwd_code, bwd_code),
    ) = run_and_get_code(
        torch.autograd.functional.vjp,
        partial(layer, seqstarts=seqstarts, max_seqlen=max_seqlen),
        (in_, wq, wk, wv, wo),
        grad_out,
    )
    print(f"{rank=} {bwd_code.count('torch.ops._c10d_functional.all_gather_into_tensor.default')}")
    torch.distributed.destroy_process_group()


def main() -> None:
    world_size = 2
    with tempfile.TemporaryDirectory() as rdv_dir:
        torch.multiprocessing.spawn(
            run,
            args=(world_size, rdv_dir),
            nprocs=world_size,
        )


if __name__ == "__main__":
    main()
```
I believe that in a distributed setting AutoAC should use the "worst-case" flops to make its decisions, because:
- it will avoid deadlocks
- it will avoid the graph depending on the content of the data on the first invocation (flakiness, unpredictability)
- when using more and more GPUs, it becomes more likely for one of them to hit the worst case (or very close to it), but since all GPUs go in lockstep they will all pay that price anyway, hence the planner should assume the worst case when planning
### Error logs
_No response_
### Versions
PyTorch nightly from PyPI, version `2.7.0.dev20250120+cu126`
cc @soulitzer @chauhang @penguinwu @zou3519 @bdhirsh @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,838,532,178
|
cpp_wrapper: persist autotune example tensors until last use
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149350
* #147225
* __->__ #146706
Patches over an issue where randomly generated example tensors can cause kernel autotuning to fail, when those tensors would not be possible outputs from previous kernels in the sequence. This fixes a failure in `test_torchinductor_opinfo.py` when run with compile-time autotuning, `test_comprehensive_nanquantile_cuda_float64`.
For clarity, the situation triggering this PR looks like kernels `A -> BCDE -> F` (`BCDE` is fused), where one of the outputs from `A` is a boolean tensor describing some of the input data. Previously, we randomly regenerated that boolean tensor and the input data before passing them to `BCDE`, so that they no longer matched. This caused a `tl.device_assert` call in `BCDE` to fail. With this PR, we reuse the random data input to `A` and the output Boolean tensor, such that they match and pass the device assertion in `BCDE`.
Fixes #147799.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,838,426,132
|
Remove NO_MULTIPROCESSING_SPAWN checks
|
cyyever
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (torchelastic)",
"ci-no-td"
] | 13
|
COLLABORATOR
|
py 3.9 has spawn.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,838,413,744
|
`SDPA`: `EFFICIENT_ATTENTION / FLASH_ATTENTION` backend, batch dim limited to 2**16-1 (CUDA error: invalid configuration argument)
|
Annusha
|
open
|
[
"module: cuda",
"triaged",
"module: sdpa"
] | 0
|
NONE
|
### 🐛 Describe the bug
As per title, see the repro
```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel
from torch.nn.attention import SDPBackend
def foo(batch_dim, backend):
    query = torch.rand([batch_dim, 8, 4, 64], device='cuda', dtype=torch.float16)
    key = torch.rand([batch_dim, 8, 4, 64], device='cuda', dtype=torch.float16)
    value = torch.rand([batch_dim, 8, 4, 64], device='cuda', dtype=torch.float16)
    with sdpa_kernel(backends=[backend]):
        F.scaled_dot_product_attention(query, key, value)
foo(2**16-1, SDPBackend.EFFICIENT_ATTENTION)
foo(2**16, SDPBackend.EFFICIENT_ATTENTION)
foo(2**16-1, SDPBackend.FLASH_ATTENTION)
foo(2**16, SDPBackend.FLASH_ATTENTION)
```
which produces the following output:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[13], line 2
1 foo(2**16-1, SDPBackend.EFFICIENT_ATTENTION)
----> 2 foo(2**16, SDPBackend.EFFICIENT_ATTENTION)
Cell In[12], line 6, in foo(batch_dim, backend)
4 value = torch.rand([batch_dim, 8, 4, 64], device='cuda', dtype=torch.float16)
5 with sdpa_kernel(backends=[backend]):
----> 6 F.scaled_dot_product_attention(query,key,value)
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[14], line 2
1 foo(2**16-1, SDPBackend.FLASH_ATTENTION)
----> 2 foo(2**16, SDPBackend.FLASH_ATTENTION)
Cell In[12], line 6, in foo(batch_dim, backend)
4 value = torch.rand([batch_dim, 8, 4, 64], device='cuda', dtype=torch.float16)
5 with sdpa_kernel(backends=[backend]):
----> 6 F.scaled_dot_product_attention(query,key,value)
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
I tested with PyTorch 2.5 and 2.7 (nightly).
Thanks @nikitaved for isolating the issue.
### Versions
```
PyTorch version: 2.7.0.dev20250207+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: 14.0.6
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.1.116.1.amd64-smp-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 NVL
Nvidia driver version: 550.54.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9554 64-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 44%
CPU max MHz: 3762.9880
CPU min MHz: 1500.0000
BogoMIPS: 6191.22
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 128 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250207+cu124
[pip3] torchaudio==2.6.0.dev20250207+cu124
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.1
[pip3] triton==3.2.0
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cupti 12.4.127 he02047a_2 conda-forge
[conda] cuda-libraries 12.4.1 ha770c72_1 conda-forge
[conda] cuda-nvrtc 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx 12.4.127 he02047a_2 conda-forge
[conda] cuda-opencl 12.4.127 he02047a_1 conda-forge
[conda] cuda-runtime 12.4.1 ha804496_0 conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.4.5.8 he02047a_2 conda-forge
[conda] libcufft 11.2.1.3 he02047a_2 conda-forge
[conda] libcurand 10.3.5.147 he02047a_2 conda-forge
[conda] libcusolver 11.6.1.9 he02047a_2 conda-forge
[conda] libcusparse 12.3.1.170 he02047a_2 conda-forge
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.4.127 he02047a_2 conda-forge
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.26.4 py311h64a7726_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250207+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250207+cu124 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.20.1 py311_cu124 pytorch
[conda] triton 3.2.0 pypi_0 pypi
```
cc @ptrblck @msaroufim @eqy
| true
|
2,838,407,583
|
[MPSInductor] Implement Welford reduction
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146703
Still a work in progress: the fallback works as expected, but the custom shader does not yet.
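For reference, a sketch of the textbook Welford update that the reduction implements (not the Metal shader itself):
```python
def welford(xs):
    mean, m2, n = 0.0, 0.0, 0
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)   # running sum of squared deviations
    return mean, m2 / n            # mean and population variance

print(welford([1.0, 2.0, 4.0]))    # (2.333..., 1.555...)
```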
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,838,402,072
|
OSX Arm64 cross-compilation of pytorch extensions fails with conda
|
stefdoerr
|
closed
|
[
"module: cpp-extensions",
"triaged",
"module: third_party"
] | 6
|
NONE
|
### 🐛 Describe the bug
The current (2.5.1) PyTorch version includes the following function in `cpp_extension.py`. The third path added to the `paths` list is not adjusted for conda the way `lib_include` is. I'm not entirely sure whether this is intentional or a mistake, but it causes the compilation issues demonstrated here: https://github.com/conda-forge/pytorch_scatter-feedstock/pull/67
```py
def include_paths(cuda: bool = False) -> List[str]:
    """
    Get the include paths required to build a C++ or CUDA extension.

    Args:
        cuda: If `True`, includes CUDA-specific include paths.

    Returns:
        A list of include path strings.
    """
    lib_include = os.path.join(_TORCH_PATH, 'include')
    if os.environ.get("CONDA_BUILD", None) is not None:
        pieces = [os.environ["PREFIX"]] + IS_WINDOWS * ["Library"] + ["include"]
        lib_include = os.path.join(*pieces)
    elif os.environ.get("CONDA_PREFIX", None) is not None:
        pieces = [os.environ["CONDA_PREFIX"]] + IS_WINDOWS * ["Library"] + ["include"]
        lib_include = os.path.join(*pieces)
    paths = [
        lib_include,
        # Remove this once torch/torch.h is officially no longer supported for C++ extensions.
        os.path.join(lib_include, 'torch', 'csrc', 'api', 'include'),
        # add site-packages/torch/include again (`lib_include` may have been pointing to
        # $PREFIX/include), as some torch-internal headers are still in this directory
        os.path.join(_TORCH_PATH, 'include'),
    ]
    if cuda and IS_HIP_EXTENSION:
        paths.append(os.path.join(lib_include, 'THH'))
        paths.append(_join_rocm_home('include'))
    elif cuda:
        cuda_home_include = _join_cuda_home('include')
        # if we have the Debian/Ubuntu packages for cuda, we get /usr as cuda home.
        # but gcc doesn't like having /usr/include passed explicitly
        if cuda_home_include != '/usr/include':
            paths.append(cuda_home_include)
        # Support CUDA_INC_PATH env variable supported by CMake files
        if (cuda_inc_path := os.environ.get("CUDA_INC_PATH", None)) and \
                cuda_inc_path != '/usr/include':
            paths.append(cuda_inc_path)
    if CUDNN_HOME is not None:
        paths.append(os.path.join(CUDNN_HOME, 'include'))
    return paths
```
Practically you get class redefinition errors:
```
arm64-apple-darwin20.0.0-clang++ -ftree-vectorize -fPIC -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include -fdebug-prefix-map=/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/work=/usr/local/src/conda/pytorch_scatter-2.1.2 -fdebug-prefix-map=/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla=/usr/local/src/conda-prefix -D_FORTIFY_SOURCE=2 -isystem /Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include -mmacosx-version-min=11.0 -mmacosx-version-min=11.0 -DWITH_PYTHON -Icsrc -I/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include -I/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include/torch/csrc/api/include -I/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_build_env/venv/lib/python3.10/site-packages/torch/include -I/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include/python3.10 -c csrc/cpu/scatter_cpu.cpp -o build/temp.macosx-11.0-arm64-cpython-310/csrc/cpu/scatter_cpu.o -O3 -Wno-sign-compare -arch arm64 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=_scatter_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++17
In file included from csrc/cpu/scatter_cpu.cpp:1:
In file included from csrc/cpu/scatter_cpu.h:3:
In file included from csrc/cpu/../extensions.h:2:
In file included from /Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include/torch/csrc/api/include/torch/torch.h:3:
In file included from /Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include/torch/csrc/api/include/torch/all.h:16:
In file included from /Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include/torch/csrc/api/include/torch/nn.h:3:
In file included from /Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include/torch/csrc/api/include/torch/nn/cloneable.h:3:
In file included from /Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include/torch/csrc/api/include/torch/nn/module.h:6:
/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include/torch/csrc/api/include/torch/ordered_dict.h:13:7: error: redefinition of 'OrderedDict'
13 | class OrderedDict {
| ^
/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738826150855/_build_env/venv/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/ordered_dict.h:13:7: note: previous definition is here
13 | class OrderedDict {
| ^
```
because the compilation command contains the following three include paths, where the third points into a different environment:
```
'/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738939003571/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include',
'/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738939003571/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include/torch/csrc/api/include',
'/Users/runner/miniforge3/conda-bld/pytorch_scatter_1738939003571/_build_env/venv/lib/python3.10/site-packages/torch/include'
```
### Versions
cc @malfet @zou3519 @xmfan
| true
|
2,838,387,074
|
Unable to Import PyTorch After Upgrade in Docker Environment
|
unbreading
|
open
|
[
"triaged",
"module: docker"
] | 2
|
NONE
|
### 🐛 Describe the bug
### Problem Description
Before upgrade:
<img width="838" alt="Image" src="https://github.com/user-attachments/assets/8950b303-510f-4136-b44d-8a3b55185941" />
After upgrade (by `pip install torch==2.4.0`)
<img width="837" alt="Image" src="https://github.com/user-attachments/assets/25a81dc3-8202-4dbc-98ec-71a11c87e0cf" />
I found that libcudnn.so.9 exists in the path /usr/local/lib/python3.10/site-packages/nvidia/cudnn/lib/, but it was not being loaded. To resolve this, I modified the _load_global_deps() function in torch/__init__.py by adding line 227 and deleting lines 245-246, as shown in the attached image. After these changes, I was able to successfully import torch.
<img width="935" alt="Image" src="https://github.com/user-attachments/assets/3405e706-3522-4234-958f-64e9af5ffb10" />
### Questions
- Is this the right solution? Am I missing any steps?
- After upgrading PyTorch using this method, I encountered numerous 'undefined symbol' errors during the installation and import of vllm and flash-attn. Could these undefined symbol problems be related to the changes?
### Versions
### Environment:
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: TencentOS Server 3.2 (Final) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Tencent 8.5.0-23)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.28
Python version: 3.10.8 (main, Oct 15 2022, 14:44:57) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10)] (64-bit runtime)
Python platform: Linux-5.4.119-1-tlinux4-0009.3-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: A100-SXM4-40GB
GPU 1: A100-SXM4-40GB
GPU 2: A100-SXM4-40GB
GPU 3: A100-SXM4-40GB
GPU 4: A100-SXM4-40GB
GPU 5: A100-SXM4-40GB
GPU 6: A100-SXM4-40GB
GPU 7: A100-SXM4-40GB
Nvidia driver version: 450.156.00
cuDNN version: Probably one of the following:
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8.9.4
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.4
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.4
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.4
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.4
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.4
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-254
Off-line CPU(s) list: 255
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 16
Vendor ID: AuthenticAMD
BIOS Vendor ID: Advanced Micro Devices, Inc.
CPU family: 25
Model: 1
Model name: AMD EPYC 7K83 64-Core Processor
BIOS Model name: AMD EPYC 7K83 64-Core Processor
Stepping: 1
CPU MHz: 3243.849
CPU max MHz: 2450.0000
CPU min MHz: 1500.0000
BogoMIPS: 4891.30
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-7,128-135
NUMA node1 CPU(s): 8-15,136-143
NUMA node2 CPU(s): 16-23,144-151
NUMA node3 CPU(s): 24-31,152-159
NUMA node4 CPU(s): 32-39,160-167
NUMA node5 CPU(s): 40-47,168-175
NUMA node6 CPU(s): 48-55,176-183
NUMA node7 CPU(s): 56-63,184-191
NUMA node8 CPU(s): 64-71,192-199
NUMA node9 CPU(s): 72-79,200-207
NUMA node10 CPU(s): 80-87,208-215
NUMA node11 CPU(s): 88-95,216-223
NUMA node12 CPU(s): 96-103,224-231
NUMA node13 CPU(s): 104-111,232-239
NUMA node14 CPU(s): 112-119,240-247
NUMA node15 CPU(s): 120-127,248-254
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.4.0
[pip3] torchaudio==2.2.0
[pip3] torchvision==0.17.0
[pip3] triton==3.0.0
[conda] Could not collect
| true
|
2,838,358,805
|
Cleanup CallOnce.h
|
cyyever
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,838,281,474
|
`#type: ignore` linter
|
rec
|
open
|
[
"module: typing",
"module: lint",
"triaged",
"better-engineering"
] | 0
|
COLLABORATOR
|
### The problem
Accurate Python type annotations are essential to building reliable and reusable software, but there are almost 5000 `#type: ignore` statements in Python files below `torch/`.
After going through several hundred such statements, I'd say the majority have a simple typing solution that doesn't need an expert in Python typing to come up with, and most of the rest still have a better solution than `#type: ignore`, though it might take some typing expertise to actually get it.
There's nothing to be done about the old annotations except fix them one at a time, but there is something automated that could be added without great effort that would discourage writing new `#type: ignore` statements.
### The solution: the `type_ignore_linter`
In the first step, every Python file in `torch/` is [`tokenized`](https://docs.python.org/3/library/tokenize.html), the number of `#type: ignore` statements is counted, and the counts are stored in a "type grandfather" file mapping filenames to ignore counts.
Then on each commit, the `type_ignore_linter` would load the grandfather file, see if the total count of `#type: ignore` had increased for any files in the commit, and if so, report an error.
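A minimal sketch (assumed implementation; the standalone counter linked in the appendix may differ) of the counting step:
```python
import json
import tokenize
from pathlib import Path

def count_type_ignores(path: Path) -> int:
    # Count "# type: ignore" comments via the tokenizer rather than a regex,
    # so strings that merely contain the text are not miscounted.
    count = 0
    with path.open("rb") as f:
        for tok in tokenize.tokenize(f.readline):
            if tok.type == tokenize.COMMENT and "type: ignore" in tok.string:
                count += 1
    return count

grandfather = {
    str(p): n
    for p in sorted(Path("torch").rglob("*.py"))
    if (n := count_type_ignores(p)) > 0
}
print(json.dumps(grandfather, indent=2))
```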
Git approval access over the "type grandfather file" could be given to some Python typing annotation fans, with the hope that people would get stuck, ask for approval for a `# type: ignore` add, and instead be helped in writing correct typing.
And as usual there would have to be a way to easily opt a file or line out of this linter, for emergencies or pathological laziness.
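As an illustration of the counting step described above, here is a minimal sketch of how the grandfather file could be built with `tokenize` (the output filename and the exact matching rule are placeholders, not a spec):
```python
import json
import tokenize
from pathlib import Path

def count_type_ignores(path: Path) -> int:
    """Count `# type: ignore` comments in one Python file using the tokenize module."""
    count = 0
    with tokenize.open(path) as f:
        for tok in tokenize.generate_tokens(f.readline):
            if tok.type == tokenize.COMMENT and "type: ignore" in tok.string:
                count += 1
    return count

# Build the grandfather mapping for everything under torch/.
grandfather = {str(p): count_type_ignores(p) for p in Path("torch").rglob("*.py")}
Path("type_ignore_grandfather.json").write_text(json.dumps(grandfather, indent=2))
```
The linter step would then just recount each changed file and compare against the stored value.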
### How much work would it be?
`tools/linters/adaptors` already has two linters that tokenize everything, one of which is a linter that counts "bad things in files" and stores them in a grandfather file. (The grandfather part is [under review](https://github.com/pytorch/pytorch/pull/145834).)
You'd want to write unit tests and productionize it as a linter, but for someone who has already done such things for other linters, it'd be one solid day of work.
The issue would be getting buy-in from everyone, as it would potentially affect any pull request. Having the ability to `noqa: ignore_linter` "to just shut it up" might be mitigating.
### How could it go wrong?
1. People with approval on the grandfather file might become a bottleneck
2. Alternatively, too many people writing to the grandfather file might mean that the system becomes toothless.
3. Developers could fix typing elsewhere in the same file just to make room for new ignores in their code (which could be worse than the status quo).
## Alternatives
The alternative is the current state: allowing any developer to add `#type: ignore` statements at any point in the code while a few developers try to add typing. See https://en.wikipedia.org/wiki/Sisyphus
## Appendix: what the grandfather list would look like:
The code to count was so short I just wrote it standalone from bits of other things, it's [here](https://github.com/rec/test/blob/master/python/count_type_ignores.py).
```
4917 ignores
{
"torch/_inductor/ir.py": 232,
"torch/_inductor/fx_passes/group_batch_fusion.py": 121,
"torch/_inductor/fx_passes/split_cat.py": 100,
"torch/nested/_internal/ops.py": 100,
"torch/_refs/__init__.py": 87,
"torch/fx/experimental/symbolic_shapes.py": 64,
"torch/testing/_internal/common_utils.py": 59,
"torch/distributed/fsdp/_flat_param.py": 50,
"torch/_inductor/graph.py": 50,
"torch/fx/experimental/sym_node.py": 45,
"torch/ao/quantization/fx/convert.py": 42,
"torch/_inductor/codegen/cuda/gemm_template.py": 41,
"torch/_dynamo/symbolic_convert.py": 40,
"torch/nn/modules/conv.py": 35,
"torch/_inductor/fx_passes/efficient_conv_bn_eval.py": 33,
"torch/_inductor/codegen/cpp.py": 33,
... many more...
```
cc @ezyang @malfet @xuzhao9 @gramster
| true
|
2,838,237,423
|
Possible memory leak when running test_model_exports_to_core_aten.py test
|
AlekseiNikiforovIBM
|
open
|
[
"module: memory usage",
"oncall: pt2",
"oncall: export"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
Running test_model_exports_to_core_aten.py test multiple times results in test process consuming increasingly more memory.
Steps to reproduce on x86:
```
docker run -it fedora:latest
dnf update -y
dnf install -y python3-pip git gcc gcc-c++ rust cargo python3-devel
git clone --recurse-submodules https://github.com/pytorch/pytorch
cd pytorch
pip install torch torchvision
pip install -r ./.ci/docker/requirements-ci.txt
cd test
python3 -bb test_model_exports_to_core_aten.py -m 'not serial' --shard-id=2 --num-shards=2 -v -vv -rfEX -p no:xdist --use-pytest --flake-finder --flake-runs=1000 --import-slow-tests
```
This results in python3 consuming all available memory, and either it gets killed by the OOM killer or an OOM daemon, or the system hangs. For debugging purposes the rerun count can be decreased from 1000 to some reasonably low number like 1, 2 or even 10.
I found this issue while investigating s390x CI failures, where for some reason this test was repeated and eventually killed by the OOM killer, failing the test job.
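A minimal sketch of one way to watch the growth between reruns (`run_one_iteration` is a placeholder for a single run of the export test, not something taken from the test file):
```python
import gc
import resource

def run_one_iteration():
    # Placeholder: call the export under test once.
    pass

for i in range(10):
    run_one_iteration()
    gc.collect()
    # ru_maxrss is the process high-water mark (kilobytes on Linux); a leak shows up
    # as a value that keeps rising on every iteration instead of plateauing.
    rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"iteration {i}: max RSS = {rss_kb} kB")
```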
### Versions
```
# python3 collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Fedora Linux 40 (Container Image) (x86_64)
GCC version: (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.8 (main, Dec 6 2024, 00:00:00) [GCC 14.2.1 20240912 (Red Hat 14.2.1-3)] (64-bit runtime)
Python platform: Linux-4.18.0-553.34.1.el8_10.x86_64-x86_64-with-glibc2.39
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
CPU family: 6
Model: 140
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 27%
CPU max MHz: 4700.0000
CPU min MHz: 400.0000
BogoMIPS: 5606.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 invpcid_single cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 5 MiB (4 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0
[pip3] optree==0.13.0
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,838,140,600
|
There is a performance drop because we have not yet implemented the batching rule for aten::mse_loss_backward.
|
f-fuchs
|
open
|
[
"triaged",
"module: batching",
"module: vmap",
"module: functorch"
] | 0
|
NONE
|
Hey,
I recently started using https://github.com/TorchJD/torchjd and now I get the following user warning:
```
/home/fuchsfa/demultiple/.venv/lib/python3.12/site-packages/torch/autograd/graph.py:823: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::mse_loss_backward. Please file us an issue on GitHub so that we can prioritize its implementation.
(Triggered internally at /pytorch/aten/src/ATen/functorch/BatchedFallback.cpp:81.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
```
Therefore I wanted to ask if there are any plans to work on this in the near future.
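For reference, a minimal sketch of the kind of call that seems to hit this fallback (my guess at a standalone repro; torchjd itself is not involved here):
```python
import torch
import torch.nn.functional as F

def loss_fn(pred, target):
    return F.mse_loss(pred, target)

preds = torch.randn(8, 10)
targets = torch.randn(8, 10)

# Per-sample gradients: the backward of mse_loss runs under vmap, so a missing
# batching rule for aten::mse_loss_backward falls back to the slow path and warns.
per_sample_grads = torch.func.vmap(torch.func.grad(loss_fn))(preds, targets)
print(per_sample_grads.shape)  # torch.Size([8, 10])
```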
cc @zou3519 @Chillee @samdow @kshitij12345
| true
|
2,837,938,657
|
Add full_like.default to list of ops with kwargs
|
Erik-Lundell
|
closed
|
[
"triaged",
"open source",
"release notes: quantization",
"release notes: AO frontend"
] | 4
|
NONE
|
The _maybe_insert_input_observers_for_node function expects ops, with a few exceptions, to have zero kwargs. full_like.default seems to be one of those exceptional cases and should therefore be added to the list.
Addresses https://github.com/pytorch/pytorch/issues/146621
Fixes #146621
| true
|
2,837,938,500
|
Enable Windows tests
|
cyyever
|
open
|
[
"module: windows",
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,837,926,509
|
Enable Windows tests
|
cyyever
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,837,869,932
|
Partitioner moves useless and memory-heavy op from bwd to fwd
|
lw
|
open
|
[
"oncall: distributed",
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This code reproduces a fp8 row-wise scaled FFN with sequence-parallelism support (derived from [here](https://github.com/facebookresearch/lingua/blob/main/lingua/float8.py)), and prints the forward Inductor graph as produced by the partitioner:
```py
import tempfile
import torch
import torch.distributed._functional_collectives as funcol
from torch._inductor.utils import run_and_get_code
def scale(t: torch.Tensor, amax_t: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    max_v = torch.finfo(torch.float8_e4m3fn).max
    scale_t = torch.clamp(amax_t.float(), min=1e-12) / max_v
    t_fp8 = (t / scale_t).to(torch.float8_e4m3fn)
    return t_fp8, scale_t

def fp8_matmul(
    first: torch.Tensor,
    amax_first: torch.Tensor,
    second_t: torch.Tensor,
    amax_second_t: torch.Tensor,
    parallel: str,
) -> torch.Tensor:
    tp_group = torch.distributed.group.WORLD
    first_fp8, scale_first = scale(first, amax_first)
    second_t_fp8, scale_second_t = scale(second_t, amax_second_t)
    if parallel == "col":
        first_fp8 = funcol.all_gather_tensor(first_fp8, gather_dim=0, group=tp_group)
        scale_first = funcol.all_gather_tensor(scale_first, gather_dim=0, group=tp_group)
    res = torch._scaled_mm(
        first_fp8,
        second_t_fp8.t(),
        scale_a=scale_first,
        scale_b=scale_second_t.t(),
        out_dtype=torch.bfloat16,
    )
    if parallel == "row":
        res = funcol.reduce_scatter_tensor(res, "sum", scatter_dim=0, group=tp_group)
    return res

REVERSE = {"col": "row", "row": "col"}

@torch.compiler.allow_in_graph
class Fp8LinearFn(torch.autograd.Function):
    @staticmethod
    def forward(
        ctx: torch.autograd.function.FunctionCtx,
        a: torch.Tensor,
        b_t: torch.Tensor,
        parallel: str,
    ) -> torch.Tensor:
        amax_a = a.abs().amax(dim=-1, keepdim=True)
        amax_b_t = b_t.abs().amax(dim=-1, keepdim=True)
        out = fp8_matmul(a, amax_a, b_t, amax_b_t, parallel)
        ctx.save_for_backward(a, b_t, amax_b_t)
        ctx.parallel = parallel
        return out

    @staticmethod
    def backward(
        ctx: torch.autograd.function.FunctionCtx, grad_out: torch.Tensor
    ) -> tuple[torch.Tensor, torch.Tensor, None]:
        a: torch.Tensor
        b_t: torch.Tensor
        amax_b_t: torch.Tensor
        a, b_t, amax_b_t = ctx.saved_tensors
        parallel = REVERSE[ctx.parallel]

        # Workaround for https://github.com/pytorch/pytorch/issues/141881.
        b_t = b_t + grad_out[0, :, None]

        b = b_t.t().contiguous()
        amax_grad_out = grad_out.abs().amax(dim=-1, keepdim=True)
        amax_b = amax_b_t.t().amax(dim=-1, keepdim=True)
        amax_b = amax_b.repeat_interleave(
            b.shape[0] // amax_b.shape[0], dim=0, output_size=b.shape[0]
        )
        grad_a = fp8_matmul(
            grad_out, amax_grad_out, b, amax_b, parallel
        )

        tp_group = torch.distributed.group.WORLD
        if parallel == "col":
            grad_out = funcol.all_gather_tensor(grad_out, gather_dim=0, group=tp_group)
        if parallel == "row":
            a = funcol.all_gather_tensor(a, gather_dim=0, group=tp_group)
        grad_b = grad_out.t() @ a
        return grad_a, grad_b, None

@torch.compile
def ffn(x: torch.Tensor, w1: torch.Tensor, w2: torch.Tensor) -> torch.Tensor:
    x = Fp8LinearFn.apply(x, w1, "col")
    x = torch.nn.functional.relu(x)
    x = Fp8LinearFn.apply(x, w2, "row")
    return x

def run(rank: int, world_size: int, rdv_dir: str) -> None:
    torch.manual_seed(0)
    torch.cuda.set_device(rank)
    torch.distributed.init_process_group(
        backend="nccl",
        rank=rank,
        world_size=world_size,
        init_method=f"file://{rdv_dir}/rdv",
    )
    in_ = torch.randn((3072 // world_size, 4096), device="cuda", dtype=torch.bfloat16)
    w1 = torch.randn((8192 // world_size, 4096), device="cuda", dtype=torch.bfloat16)
    w2 = torch.randn((4096, 8192 // world_size), device="cuda", dtype=torch.bfloat16)
    grad_out = torch.randn((3072 // world_size, 4096), device="cuda", dtype=torch.bfloat16)
    (
        (out, (grad_in, grad_w1, grad_w2)),
        (fwd_code, bwd_code),
    ) = run_and_get_code(
        torch.autograd.functional.vjp,
        ffn,
        (in_, w1, w2),
        grad_out,
    )
    print(fwd_code)
    torch.distributed.destroy_process_group()

def main() -> None:
    world_size = torch.cuda.device_count()
    with tempfile.TemporaryDirectory() as rdv_dir:
        torch.multiprocessing.spawn(
            run,
            args=(world_size, rdv_dir),
            nprocs=world_size,
        )

if __name__ == "__main__":
    main()
```
This is what we see at the end of the graph:
```py
# Topologically Sorted Source Nodes: [], Original ATen: [_c10d_functional.all_gather_into_tensor]
buf26 = torch.ops._c10d_functional.all_gather_into_tensor.default(primals_1, 8, '0')
assert_size_stride(buf26, (3072, 4096), (4096, 1))
# Topologically Sorted Source Nodes: [], Original ATen: [_c10d_functional.wait_tensor]
torch.ops._c10d_functional.wait_tensor.default(buf26)
del primals_1
return (buf21, primals_2, primals_3, buf13, reinterpret_tensor(buf24, (1, 1, 1), (1, 1, 1), 0), reinterpret_tensor(buf25, (1, 1, 1), (1, 1, 1), 0), buf26, )
```
This appears completely nonsensical: there's an all-gather that, in eager mode, is performed in the backward pass, which gets moved to the forward graph by the partitioner. The result of the all-gather is immediately returned, thus getting kept around as a checkpointed activation until the backward. The sensible thing to do would be to save the _input_ of the all-gather for backward, as it occupies 8 times less memory than the output, and perform the all-gather in the backward. I don't see what would push the partitioner to prefer putting this all-gather in the forward, as there's no chance for that all-gather to be fused or overlapped with other ops in the forward.
### Versions
PyTorch nightly from PyPI, version `2.7.0.dev20250120+cu126`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @zou3519 @bdhirsh @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,837,752,586
|
[clang-tidy] Add suppression clang-diagnostic-shadow
|
dmpolukhin
|
open
|
[
"fb-exported",
"Stale",
"topic: not user facing"
] | 13
|
NONE
|
Summary:
Reviewed By: varun2784
Differential Revision: D69182465
| true
|
2,837,713,751
|
MPS support `torch.linalg.norm` on complex numbers
|
JackVittori
|
open
|
[
"triaged",
"module: complex",
"enhancement",
"module: linear algebra",
"actionable"
] | 3
|
NONE
|
### 🚀 The feature, motivation and pitch
I am working on a quantum machine learning project with PennyLane and PyTorch on an M3 Pro, and I would like to use the GPU to train my models. I would like to be able to use torch.linalg.norm on torch.complex dtypes. Example:
```python
import torch
device = torch.device('mps')
state = torch.randn(100, 100, dtype=torch.complex64, device=device)
norm = torch.linalg.norm(state, dim=-1)
# RuntimeError: norm ops are not supported for complex yet
```
It does not give any error when it is run with:
```python
import torch
device = torch.device('cpu') #or 'cuda'
state = torch.randn(100, 100, dtype=torch.complex64, device=device)
norm = torch.linalg.norm(state, dim=-1)
```
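In the meantime, a possible workaround sketch that stays on MPS by reducing over the absolute values instead (this assumes element-wise `abs` on complex tensors is supported by the MPS build in use, and that the 2-norm over the last dim is what's needed):
```python
import torch

device = torch.device('mps')
state = torch.randn(100, 100, dtype=torch.complex64, device=device)

# |x| is real, so the reduction avoids the unsupported complex norm path.
norm = torch.sqrt((state.abs() ** 2).sum(dim=-1))
```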
Thank you in advance for your consideration.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames @jianyuh @pearu @walterddr @xwang233 @Lezcano
| true
|
2,837,663,381
|
Enable pt2e quantization path for arm
|
choudhary-devang
|
open
|
[
"module: cpu",
"triaged",
"open source",
"module: arm",
"release notes: quantization",
"release notes: AO frontend"
] | 56
|
NONE
|
**Title**: Enable PyTorch 2 Export Quantization path for ARM CPUs.
**Description:**
- This PR extends the PyTorch 2 Export Quantization (PT2E Quantization) workflow—originally available only on x86 CPUs—to support ARM platforms. PT2E Quantization is an automated, full-graph quantization solution in PyTorch that improves on Eager Mode Quantization by adding support for functionals and automating the overall process. It is part of the torch.ao module and fully supports quantization when using the compile mode.
**Key Changes:**
- Introduces ARM-specific support by leveraging oneDNN kernels for matmuls and convolution.
- Integrates pre-defined configuration selection to automatically choose the best quantization settings based on the selected quantization method.
**Provides customization options via two flags:**
- **qat_state:** Indicates whether to use Quantization Aware Training (if set to True) or Post Training Quantization (if set to False). The default remains False.
- **dynamic_state:** Selects between dynamic quantization (if True) and static quantization (if False). The default is also set to False.

These options allow users to tailor the quantization process for their specific workload requirements (e.g., using QAT for fine-tuning or PTQ for calibration-based quantization).
Testing and Validation:
The new ARM flow has been thoroughly tested across a range of models with all combinations:
**NLP**: Models such as BERT and T5.
**Vision**: Models like ResNet and ViT.
**Custom Models**: user defined models with various operators.
example script:
```
import torch
import torchvision.models as models
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
import torch.ao.quantization.quantizer.arm_inductor_quantizer as armiq
from torch.ao.quantization.quantizer.arm_inductor_quantizer import ArmInductorQuantizer
from torch.profiler import profile, record_function, ProfilerActivity
model_name = "resnet50"
model = models.__dict__[model_name](pretrained=True)
# Set the model to eval mode
model = model.eval()
# Create the data, using the dummy data here as an example
traced_bs = 500
x = torch.randn(traced_bs, 3, 224, 224).contiguous(memory_format=torch.channels_last)
example_inputs = (x,)
with torch.no_grad():
    exported_model = torch.export.export_for_training(model, example_inputs).module()
    quantizer = armiq.ArmInductorQuantizer()
    quantizer.set_global(armiq.get_default_arm_inductor_quantization_config(is_dynamic=False))
    prepared_model = prepare_pt2e(exported_model, quantizer)
    converted_model = convert_pt2e(prepared_model)

with torch.set_grad_enabled(False):
    for _ in range(50):
        converted_model(*example_inputs)  # Warmup
    print("Warmup over")
    with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        with record_function("model_inference"):
            for _ in range(100):
                converted_model(*example_inputs)

print(prof.key_averages(group_by_input_shape=True).table(sort_by="self_cpu_time_total"))
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,837,610,676
|
Update addbmm, addmm, addmv and baddbmm description
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 16
|
CONTRIBUTOR
|
Fixes #146611, following #146482
## Test Result

| true
|
2,837,604,376
|
[Dynamo] Support for more binary ops
|
mieshkiwrk
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks",
"module: compile ux"
] | 0
|
NONE
|
### 🐛 Describe the bug
The example below gives the graph break `torch._dynamo.exc.Unsupported: builtin: and_ [<class 'torch._dynamo.variables.user_defined.UserDefinedObjectVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>] False`
This looks intended for now, and it can be supported with little effort by adding an entry to `fns` in the `BuiltinVariable::_binops` function at `torch/_dynamo/variables/builtin.py`:
`operator.and_: (["__and__", "__rand__", "__iand__"], operator.and_),` and its alternatives for the other binary operators.
There's a comment, added 2 years ago and quoted below - is it still a valid reason to block dynamo from supporting the other binary operators?
Maybe it could support the other cases and keep them disabled only for dynamic shapes?
```
# NB: The follow binary operators are not supported for now, since the
# corresponding magic methods aren't defined on SymInt / SymFloat:
# operator.matmul
# divmod
# operator.and_
# operator.or_
# operator.xor
```
```python
import torch
class DummyClass:
    def __init__(self, value):
        self.value = value

    def __and__(self, other):
        return self.value & int(other)

def test_fn():
    v = DummyClass(1)
    return (v & 1)
compiled_test_fn = torch.compile(test_fn)
compiled_test_fn()
```
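For completeness, a sketch of what the additional entries mentioned above might look like (purely hypothetical, just mirroring the format of the existing `_binops` entry quoted earlier):
```python
import operator

# Hypothetical additions, mirroring the existing entry format in
# BuiltinVariable._binops (torch/_dynamo/variables/builtin.py):
extra_binops = {
    operator.and_: (["__and__", "__rand__", "__iand__"], operator.and_),
    operator.or_: (["__or__", "__ror__", "__ior__"], operator.or_),
    operator.xor: (["__xor__", "__rxor__", "__ixor__"], operator.xor),
}
# These would be merged into the `fns` dict built in that function.
```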
### Versions
PT 2.6
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,837,602,919
|
Make GetCPUAllocatorMaybePinned to be Device-Agnostic
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146687
----
- Keep cuda first to perserve BC
- Remove cuda first if it is possible to have only one accelerator at a time in the future
| true
|
2,837,600,545
|
Unexpected specialization during estimate_runtime
|
laithsakka
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
idk if the title is clear enough (probably not), but this issue is about a specialization that happens during the forward/backward partition.
we have the following joint graph:
```python
def forward(...):
    # bla bla bla
    # the important part
    mul_24: "Sym(s2*s9)" = primals_3 * primals_10
    view: "bf16[s2*s9, s0][s0, 1]cuda:0" = torch.ops.aten.view.default(convert_element_type_4, [mul_24, primals_5]); convert_element_type_4 = None
    permute: "bf16[s0, s15][1, s0]cuda:0" = torch.ops.aten.permute.default(primals_17, [1, 0]); primals_17 = None
    addmm: "bf16[s2*s9, s15][s15, 1]cuda:0" = torch.ops.aten.addmm.default(primals_18, view, permute); primals_18 = None
    view_1: "bf16[s2, s9, s15][s15*s9, s15, 1]cuda:0" = torch.ops.aten.view.default(addmm, [primals_3, primals_10, primals_16]);
    # bla bla bla
```
Note everything is dynamic in the graph
full graph
https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/aps-dynTrue-bda4f7b89b/attempt_0/version_0/rank_0/-_29_3_1/aot_joint_graph_640.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
After partitioning, what happens is that s0 gets specialized to 265. Note that hint(s1) = 265, hint(s2) = 128, hint(s9) = 56, s2*s9 = 7168.
OK, so we get the following:
```python
def forward(...):
    # bla bla bla
    mul_24: "Sym(s2*s9)" = primals_3 * primals_10
    view: "bf16[s2*s9, 256][256, 1]cuda:0" = torch.ops.aten.view.default(convert_element_type_4, [mul_24, primals_5]); convert_element_type_4 = None
    permute: "bf16[256, 256][1, 256]cuda:0" = torch.ops.aten.permute.default(primals_17, [1, 0]); primals_17 = None
    addmm: "bf16[s2*s9, 256][256, 1]cuda:0" = torch.ops.aten.addmm.default(primals_18, view, permute); view = None
    view_1: "bf16[s2, s9, 256][256*s9, 256, 1]cuda:0" = torch.ops.aten.view.default(addmm, [primals_3, primals_10, primals_16]); addmm = None
    ...
    # bla bla bla
```
Why did it get specialized?
At some point we call is_contiguous.
The function is shown below (with some logs that I added).
We specialize at `if guard_size_oblivious(y != expected_stride):`
```python
# This function is equivalent to compute_contiguous() from TensorImpl.cpp
def is_contiguous(a: TensorLikeType) -> bool:
"""
Tests whether a tensor is contiguous or not.
Tensors are contiguous when they have no elements,
one element, or when they have "nested" strides.
"""
from torch.fx.experimental.symbolic_shapes import guard_size_oblivious
logger.info(f"inside is_contiguous with size: {a.size()} stride: {a.stride()}")
if guard_size_oblivious(a.numel() < 2):
return True
expected_stride = 1
for x, y in reversed(tuple(zip(a.shape, a.stride()))):
# Skips checking strides when a dimension has length 1
if guard_size_oblivious(x == 1):
continue
logger.info(f"checking the following {y} != {expected_stride}")
if guard_size_oblivious(y != expected_stride):
return False
expected_stride = expected_stride * x
return True
```
the logs printed are the following:
```
inside is_contiguous with size: torch.Size([7168, 256]) stride: (s0, 1)
checking the following s0 != 256
```
The big question is why the tensor passed to is_contiguous has size [7168, 256] and not [s2*s9, s0]. This sounds problematic because there was no specialization for s0 before that point, nor for s2 or s9; in fact s2*s9 is still there as-is in the split graphs!?
The full stack trace where is_contiguous gets called is:
```
File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 451, in aot_dispatch_autograd
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] fw_module, bw_module = aot_config.partition_fn(
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_inductor/compile_fx.py", line 1818, in partition_fn
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return min_cut_rematerialization_partition(
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_functorch/partitioners.py", line 1847, in min_cut_rematerialization_partition
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] saved_values = choose_saved_values_set(
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_functorch/partitioners.py", line 1597, in choose_saved_values_set
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] runtimes_banned_nodes = [
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_functorch/partitioners.py", line 1598, in <listcomp>
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] estimate_runtime(node) for node in all_recomputable_banned_nodes
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_functorch/partitioners.py", line 1482, in estimate_runtime
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] node.target(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_ops.py", line 758, in __call__
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self._op(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/utils/flop_counter.py", line 790, in __torch_dispatch__
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] out = func(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_ops.py", line 758, in __call__
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self._op(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/utils/_stats.py", line 26, in wrapper
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return fn(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1267, in __torch_dispatch__
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self.dispatch(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1808, in dispatch
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self._cached_dispatch_impl(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1378, in _cached_dispatch_impl
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] output = self._dispatch_impl(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 2282, in _dispatch_impl
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] decomposition_table[func](*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_prims_common/wrappers.py", line 310, in _fn
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] result = fn(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_decomp/decompositions.py", line 84, in inner
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/utils/_pytree.py", line 998, in tree_map
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return treespec.unflatten(map(func, *flat_args))
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/utils/_pytree.py", line 844, in unflatten
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] leaves = list(leaves)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_decomp/decompositions.py", line 74, in increase_prec
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return x.to(computation_dtype)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/utils/_stats.py", line 26, in wrapper
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return fn(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1267, in __torch_dispatch__
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self.dispatch(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1808, in dispatch
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self._cached_dispatch_impl(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1378, in _cached_dispatch_impl
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] output = self._dispatch_impl(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 2287, in _dispatch_impl
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] r = func.decompose(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_ops.py", line 801, in decompose
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self._op_dk(dk, *args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/utils/_stats.py", line 26, in wrapper
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return fn(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1267, in __torch_dispatch__
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self.dispatch(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1808, in dispatch
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self._cached_dispatch_impl(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1378, in _cached_dispatch_impl
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] output = self._dispatch_impl(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 2282, in _dispatch_impl
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] decomposition_table[func](*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_prims_common/wrappers.py", line 310, in _fn
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] result = fn(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_decomp/decompositions.py", line 2123, in _to_copy
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] x_tensor = torch._prims.convert_element_type(x_tensor, dtype)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_ops.py", line 758, in __call__
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self._op(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/utils/_stats.py", line 26, in wrapper
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return fn(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1267, in __torch_dispatch__
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self.dispatch(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1808, in dispatch
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self._cached_dispatch_impl(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 1378, in _cached_dispatch_impl
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] output = self._dispatch_impl(func, types, args, kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_subclasses/fake_tensor.py", line 2304, in _dispatch_impl
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] func.prim_meta_impl(*args, **kwargs)
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_prims/__init__.py", line 1909, in _convert_element_type_meta
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] if torch._prims_common.is_non_overlapping_and_dense(a):
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_prims_common/__init__.py", line 397, in is_non_overlapping_and_dense
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] if is_contiguous(a) or is_channels_last_contiguous(a):
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_prims_common/__init__.py", line 283, in is_contiguous
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] if guard_size_oblivious(y != expected_stride):
```
Finally, I added printing for op calls:
```
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_ops.py", line 758, in __call__
[trainer9|1]:[rank9]:I0203 22:44:35.461931 4546 torch/fx/experimental/symbolic_shapes.py:6354] [28/0_1] return self._op(*args, **kwargs)
```
and I got this:
```
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,470: INFO: calling op<built-in method of PyCapsule object at 0x7f50c76aaf40>, args=(), kwargs={'size': [s0, s0], 'stride': [1, s0], 'dtype': torch.bfloat16, 'layout': None, 'pin_memory': False, 'device': device(type='meta')}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,470: INFO: calling op<built-in method of PyCapsule object at 0x7f50c76aaf40>, args=(), kwargs={'size': [s0, s0], 'stride': [1, s0], 'dtype': torch.bfloat16, 'layout': None, 'pin_memory': False, 'device': device(type='meta')}
[trainer0|0]:[rank0]:I0207 00:09:56.500086 4484 torch/_functorch/partitioners.py:1120] [29/3_1] used above/below fusible mul_2:(30) -> 60 -> mul_51:(63)
[trainer0|0]:[rank0]:I0207 00:09:56.500370 4484 torch/_functorch/partitioners.py:1120] [29/3_1] used above/below fusible mul_24:(46) -> 49 -> view_2:(51)
[trainer0|0]:[rank0]:I0207 00:09:56.500485 4484 torch/_functorch/partitioners.py:1120] [29/3_1] used above/below fusible mul_24:(46) -> 49 -> add_49:(52)
[trainer0|0]:[rank0]:I0207 00:09:56.500596 4484 torch/_functorch/partitioners.py:1120] [29/3_1] used above/below fusible mul_24:(46) -> 49 -> view_3:(57)
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,530: INFO: calling op<built-in method of PyCapsule object at 0x7f50c76aaf40>, args=(), kwargs={'size': [7168, 256], 'stride': [s0, 1], 'dtype': torch.bfloat16, 'layout': torch.strided, 'pin_memory': False, 'device': device(type='meta')}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,530: INFO: calling op<built-in method of PyCapsule object at 0x7f50c76aaf40>, args=(), kwargs={'size': [7168, 256], 'stride': [s0, 1], 'dtype': torch.bfloat16, 'layout': torch.strided, 'pin_memory': False, 'device': device(type='meta')}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,530: INFO: calling op<built-in method of PyCapsule object at 0x7f50c76aaf40>, args=(), kwargs={'size': [7168, 256], 'stride': [s0, 1], 'dtype': torch.bfloat16, 'layout': torch.strided, 'pin_memory': False, 'device': device(type='meta')}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,532: INFO: calling op<built-in method of PyCapsule object at 0x7f50c76aaf40>, args=(), kwargs={'size': [256, 256], 'stride': [1, s0], 'dtype': torch.bfloat16, 'layout': torch.strided, 'pin_memory': False, 'device': device(type='meta')}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,532: INFO: calling op<built-in method of PyCapsule object at 0x7f50c76aaf40>, args=(), kwargs={'size': [256, 256], 'stride': [1, s0], 'dtype': torch.bfloat16, 'layout': torch.strided, 'pin_memory': False, 'device': device(type='meta')}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,532: INFO: calling op<built-in method of PyCapsule object at 0x7f50c76aaf40>, args=(), kwargs={'size': [256, 256], 'stride': [1, s0], 'dtype': torch.bfloat16, 'layout': torch.strided, 'pin_memory': False, 'device': device(type='meta')}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,534: INFO: calling op<built-in method of PyCapsule object at 0x7f50c6fa5b00>, args=(FakeTensor(..., device='cuda:0', size=(256,), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(7168, 256), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(256, 256), dtype=torch.bfloat16)), kwargs={}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,534: INFO: calling op<built-in method of PyCapsule object at 0x7f50c6fa5b00>, args=(FakeTensor(..., device='cuda:0', size=(256,), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(7168, 256), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(256, 256), dtype=torch.bfloat16)), kwargs={}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,534: INFO: calling op<built-in method of PyCapsule object at 0x7f50c6fa5b00>, args=(FakeTensor(..., device='cuda:0', size=(256,), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(7168, 256), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(256, 256), dtype=torch.bfloat16)), kwargs={}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,534: INFO: calling op<built-in method of PyCapsule object at 0x7f50c6fa5b00>, args=(FakeTensor(..., device='cuda:0', size=(256,), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(7168, 256), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(256, 256), dtype=torch.bfloat16)), kwargs={}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,534: INFO: calling op<built-in method of PyCapsule object at 0x7f50c6fa5b00>, args=(FakeTensor(..., device='cuda:0', size=(256,), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(7168, 256), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(256, 256), dtype=torch.bfloat16)), kwargs={}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,534: INFO: calling op<built-in method of PyCapsule object at 0x7f50c6fa5b00>, args=(FakeTensor(..., device='cuda:0', size=(256,), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(7168, 256), dtype=torch.bfloat16), FakeTensor(..., device='cuda:0', size=(256, 256), dtype=torch.bfloat16)), kwargs={}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,535: INFO: calling op<built-in method of PyCapsule object at 0x7f50c70c3510>, args=(FakeTensor(..., device='cuda:0', size=(7168, 256), dtype=torch.bfloat16), torch.float32), kwargs={}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,535: INFO: calling op<built-in method of PyCapsule object at 0x7f50c70c3510>, args=(FakeTensor(..., device='cuda:0', size=(7168, 256), dtype=torch.bfloat16), torch.float32), kwargs={}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,535: INFO: calling op<built-in method of PyCapsule object at 0x7f50c70c3510>, args=(FakeTensor(..., device='cuda:0', size=(7168, 256), dtype=torch.bfloat16), torch.float32), kwargs={}
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,535: INFO: inside is_contiguous with size: torch.Size([7168, 256]) stride: (s0, 1)
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,535: INFO: inside is_contiguous with size: torch.Size([7168, 256]) stride: (s0, 1)
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,535: INFO: inside is_contiguous with size: torch.Size([7168, 256]) stride: (s0, 1)
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,536: INFO: checking the following 1 != 1
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,536: INFO: checking the following 1 != 1
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,536: INFO: checking the following 1 != 1
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,536: INFO: checking the following s0 != 256
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,536: INFO: checking the following s0 != 256
[trainer0|0]:strobelight_pytorch_profiler: 2025-02-07 00:09:56,536: INFO: checking the following s0 != 256
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @zou3519 @bdhirsh @yf225 @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,837,596,471
|
Fix integer overflow in (fake) quantization
|
Flamefire
|
closed
|
[
"module: cpu",
"release notes: quantization"
] | 1
|
COLLABORATOR
|
The `static_cast<int64_t>` can overflow for large float values and/or a small scale (e.g. 9.2e14 & 1e-4)
Fix a similar issue in the mask calculation where `std::lrint` is used which may convert to a 32 bit float returning an implementation defined value on overflow.
Stay in float mode using `std::round` and `fmin/fmax` to avoid this.
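To illustrate the idea, a small Python sketch of the clamping behavior (not the kernel code, just the numerical argument):
```python
import torch

x = torch.tensor([9.2e14])
scale, zero_point, qmin, qmax = 1e-4, 0, -128, 127

# Naively casting round(x / scale) to an integer type overflows for this input.
# Clamping while still in floating point keeps the value representable, and the
# cast back only happens after the clamp.
q = torch.round(x / scale) + zero_point
q = torch.clamp(q, qmin, qmax)
fake_q = (q - zero_point) * scale
print(fake_q)  # tensor([0.0127]) == qmax * scale
```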
Fixes #111471
I actually fixed the CUDA code first and copied that.
I didn't touch the code duplication, which can likely be removed by using a fitting `AT_DISPATCH`, but for some reason the zero_point is `int32_t` while the limits are `int64_t`, which to me doesn't make much sense; the actual type could always be used.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
See https://github.com/pytorch/pytorch/pull/129127
| true
|
2,837,575,394
|
Optimize LRScheduler docs
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"release notes: optim"
] | 10
|
CONTRIBUTOR
|
Fixes #120735
Add more description about [`LRScheduler`](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LRScheduler.html#torch.optim.lr_scheduler.LRScheduler)
## Changes
> 1. What the constructor's last_epoch argument is.
And does epoch start from 0 or 1?
Also there are two terms - "epoch", "step" - are they the same?
`last_epoch` is explained via the Args description; the difference should become clear after comparing it with the `step` method description.
> 2. That the constructor [relies on/creates the 'initial_lr'](https://github.com/pytorch/pytorch/blob/v2.2.1/torch/optim/lr_scheduler.py#L51) property on the .optimizer.
(By the way, is the Optimizer class up with it?)
`initial_lr` is set when the LRScheduler is initialized, using the optimizer's lr value, which is in turn set when the optimizer is created. But these are inner implementation details that users don't need to care about.
> 3. That .get_last_lr() and .get_lr() are totally different methods despite of naming.
Wait, there are also ._get_closed_form_lr() methods, hm...
Added descriptions to `get_last_lr` and `get_lr`; the private method `_get_closed_form_lr` should not be exposed to users in the docs.
> 4. That the constructor [does .step()](https://github.com/pytorch/pytorch/blob/v2.2.1/torch/optim/lr_scheduler.py#L85-L91) itself via ._initial_step() method.
> 5. Which arguments the .step() method has.
Or is it (the epoch argument) [deprecated](https://github.com/pytorch/pytorch/blob/v2.2.1/torch/optim/lr_scheduler.py#L156)?
Added a note to the `step` method doc that the `epoch` argument is deprecated.
> 6. What does .step() do?
Does it modify (in some way) the .optimizer? (See also p.2.)
Update `step` doc
> 7.Are the .last_epoch, .base_lrs attributes public?
(Don't know, maybe it is not accepted to publish such information.)
(For .last_epoch, see also p.1a.)
These params seem to be handled by `LRScheduler` itself, and no example of their use is given [here](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate); I assume users can omit these params when using `LRScheduler`.
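For context, a minimal usage sketch of the pattern these docs describe (toy model, not tied to any specific scheduler change in this PR):
```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)

for epoch in range(5):
    optimizer.step()   # stand-in for a real training epoch
    scheduler.step()
    # get_last_lr() returns the most recently computed lr, one entry per param group.
    print(epoch, scheduler.get_last_lr())
```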
## Test Result
### Before

### After


cc @janeyx99
| true
|
2,837,564,535
|
Web Page do not match the original documentation
|
ZhaoqiongZ
|
closed
|
[
"module: docs",
"triaged",
"module: xpu"
] | 2
|
CONTRIBUTOR
|
### 📚 The doc issue
For the documentation Getting Started on Intel GPU at this link https://pytorch.org/docs/stable/notes/get_start_xpu.html, the content hasn't been updated to the latest version in the release/2.6 branch https://github.com/pytorch/pytorch/blob/release/2.6/docs/source/notes/get_start_xpu.rst . The latest changes include fixing the format and replacing the preview version with the release version. Please help update the web page.
### Suggest a potential alternative/fix
_No response_
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,837,560,463
|
How to get last layer hidden state of transformer model while convert model to onnx format?
|
Jianshu1only
|
closed
|
[
"module: onnx",
"triaged"
] | 1
|
NONE
|
I am currently working with a model that has been exported to the ONNX format. For my project, I need to extract the last layer hidden states during inference. However, I couldn’t find any documentation or example that explains how to achieve this using an ONNX-exported model.
Does the ONNX format retain the capability to extract the last layer hidden states?
Thanks!
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,837,499,722
|
[MPS] lu unpack
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 7
|
COLLABORATOR
|
Implements lu unpack function on MPS. Haven't added new tests because they are covered by removing the lu_unpack from UNIMPLEMENTED_XFAILLIST in test_mps with `test_output_match` function
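For reference, a minimal usage sketch of the op (factoring on CPU here and only unpacking on MPS, to keep the assumptions about what else is implemented on MPS to a minimum):
```python
import torch

A = torch.randn(4, 4)
LU, pivots = torch.linalg.lu_factor(A)                     # factor on CPU
P, L, U = torch.lu_unpack(LU.to("mps"), pivots.to("mps"))  # unpack on MPS
print(torch.allclose((P @ L @ U).cpu(), A, atol=1e-5))
```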
| true
|
2,837,489,208
|
Need guidance to modify number of cores/threads settings when inferencing under transformers&pytorch framework
|
luentong
|
closed
|
[
"module: cpu",
"triaged",
"module: arm"
] | 6
|
NONE
|
In my project I need to run inference on a LlavaLlamaForCausalLM instance under the transformers framework, which is basically a subclass of nn.Module. I want to modify the number of cores/threads used during inference, on a machine with 160 CPU cores and no GPU. Which folder in the source code of transformers/pytorch is responsible for this? Can anyone point to a specific location? Thanks a lot
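For what it's worth, a hedged sketch of the knobs that control this at the PyTorch level (transformers mostly inherits these; whether they help a given LLaVA setup is a separate question):
```python
import torch

# Threads used within a single op (e.g. one large matmul).
torch.set_num_threads(160)
# Threads used to run independent ops concurrently; must be set before the
# first inter-op parallel work is launched.
torch.set_num_interop_threads(4)

print(torch.get_num_threads(), torch.get_num_interop_threads())
```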
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01 @frank-wei
| true
|
2,837,429,944
|
Torch 2.6.0 cu126 is missing several dependencies in the METADATA-file
|
anates
|
closed
|
[
"high priority",
"oncall: releng",
"triaged",
"module: regression"
] | 7
|
NONE
|
### 🐛 Describe the bug
When upgrading from torch-2.6.0+cu124 to torch-2.6.0+cu126 on unix, several dependencies are lost in the METADATA-file:
For cu124 the following packages exist:
```
Requires-Dist: nvidia-cuda-nvrtc-cu12 (==12.4.127) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-runtime-cu12 (==12.4.127) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-cupti-cu12 (==12.4.127) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cudnn-cu12 (==9.1.0.70) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cublas-cu12 (==12.4.5.8) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cufft-cu12 (==11.2.1.3) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-curand-cu12 (==10.3.5.147) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusolver-cu12 (==11.6.1.9) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusparse-cu12 (==12.3.1.170) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusparselt-cu12 (==0.6.2) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nccl-cu12 (==2.21.5) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nvtx-cu12 (==12.4.127) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nvjitlink-cu12 (==12.4.127) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: triton (==3.2.0) ; platform_system == "Linux" and platform_machine == "x86_64"
```
However, for cu126 these are no longer available in the unix-builds, only in the windows-based builds. This leads to issues such as https://github.com/python-poetry/poetry/issues/10152#issue-2834951846
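For reference, one way to inspect what the installed wheel actually declares:
```python
from importlib.metadata import metadata

md = metadata("torch")
for req in md.get_all("Requires-Dist") or []:
    print(req)
```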
### Versions
Not relevant for this bug/issue
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim
| true
|
2,837,367,604
|
[dynamo][not ready] polyfill infra for classes
|
anijain2305
|
open
|
[
"Stale",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146678
* #146737
* #146677
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,837,367,528
|
[dynamo][user-defined] User class.__new__ instead of special casing
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146819
* #146737
* __->__ #146677
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,837,299,044
|
`out` should exist as an instance variable out of the func itself
|
ILCSFNO
|
closed
|
[
"triaged",
"module: random",
"module: python frontend"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Parameter `out` is widely used in PyTorch, e.g. `torch.randint()`, `torch.arange()`, `torch.fft.rfftfreq()`, `torch.quantile()`, `torch.ormqr()`, ...
Since `out` serves as the object the result is returned into, in my view the value passed to `out` should be a variable that already exists outside the function call.
Take `torch.randint()` as an example; both of the situations below currently run fine:
```python
# Use `out` as something which not exist as an instance variable in the global scope
import torch
randint_result = torch.randint(0, 10, (100, 100), out = torch.empty(100, 100))
```
```python
# Use `out` as something which exist as an instance variable in the global scope
import torch
out = torch.empty(100, 100)
randint_result = torch.randint(0, 10, (100, 100), out = out)
print(out)
## Output:
# tensor([[4., 2., 7., ..., 6., 4., 5.],
# [5., 7., 6., ..., 9., 0., 0.],
# [4., 7., 0., ..., 1., 5., 3.],
# ...,
# [4., 2., 3., ..., 6., 7., 8.],
# [1., 7., 3., ..., 8., 4., 8.],
# [6., 5., 9., ..., 2., 8., 2.]])
```
The second one, which passes an existing variable to `out`, is the intended usage.
But the first one, which passes a non-existent (temporary) variable to `out`, would be expected to raise an error or warning, yet it also runs fine!
### Suggestions
* Check whether the argument passed to `out` exists as a variable outside the function.
* Raise a warning/error as shown below.
I have tried comparing local variables against global variables with the code below, which may be useful for this issue:
**_Note that the code below checks against global variables; sometimes the variable passed in from outside the function is itself a local variable of the caller!_**
```python
import inspect
import warnings
def verify_variable_names(func, local_vars, global_vars, check_vars = None):
    ## local vars
    sig = inspect.signature(func)
    local_names = list(sig.parameters.keys())
    local_values = [local_vars[name] for name in local_names]
    ## global vars & match
    external_var_names = {}
    for local_name, local_value in zip(local_names, local_values):
        if check_vars is not None and local_name not in check_vars:
            continue
        for global_name, global_value in global_vars.items():
            if id(global_value) == id(local_value):
                external_var_names[local_name] = global_name
                break
        if local_name not in external_var_names:
            warnings.warn(f"{local_name} in {func.__name__} not found as a valid variable in the global scope.") # if select to raise warning
            # raise RuntimeError(f"{local_name} in {func.__name__} not found as a valid variable in the global scope.") # if select to raise error
    # print(f"external_var_names: {external_var_names}")

def my_func(param1, param2):
    ## globals()[inspect.currentframe().f_code.co_name] refers to the current function call: my_func
    verify_variable_names(globals()[inspect.currentframe().f_code.co_name], locals(), globals())
    ## normal function body
    # ...
## example usage
a = 10
aa = 20.0 # as an interference term of `b` with the same value
b = 20.0
my_func(a, b) # success
my_func(a, 20.0) # raise warning/error
```
For `torch.randint()` specifically, here is an example for one of its overloads, which can distinguish an existing variable from a non-existent one:
```python
import builtins
from typing import (Any, Callable, ContextManager, Iterator, List, Literal, NamedTuple, Optional, overload, Sequence, Tuple, TypeVar, Union,)
import torch
from torch import contiguous_format, Generator, inf, memory_format, strided, SymInt, Tensor
from torch.types import (_bool, _complex, _device, _dtype, _float, _int, _layout, _qscheme, _size, Device, Number,)
from torch._prims_common import DeviceLikeType
import inspect
import warnings
def verify_variable_names(func, local_vars, global_vars, check_vars = None):
    ## local vars
    sig = inspect.signature(func)
    local_names = list(sig.parameters.keys())
    local_values = [local_vars[name] for name in local_names]
    ## global vars & match
    external_var_names = {}
    for local_name, local_value in zip(local_names, local_values):
        if check_vars is not None and local_name not in check_vars:
            continue
        for global_name, global_value in global_vars.items():
            if id(global_value) == id(local_value):
                external_var_names[local_name] = global_name
                break
        if local_name not in external_var_names:
            warnings.warn(f"{local_name} in {func.__name__} not found as a valid variable in the global scope.") # select to raise warning
            # raise RuntimeError(f"{local_name} in {func.__name__} not found as a valid variable in the global scope.") # select to raise error
    # print(f"external_var_names: {external_var_names}")

def randint(low: Union[_int, SymInt], high: Union[_int, SymInt], size: Sequence[Union[_int, SymInt]], *, out: Optional[Tensor] = None, dtype: Optional[_dtype] = None, layout: Optional[_layout] = None, device: Optional[Optional[DeviceLikeType]] = None, pin_memory: Optional[_bool] = False, requires_grad: Optional[_bool] = False) -> Tensor:
    verify_variable_names(globals()[inspect.currentframe().f_code.co_name], locals(), globals(), check_vars = ['out'])
    return torch.randint(low, high, size, out = out, dtype = dtype, layout = layout, device = device, pin_memory = pin_memory, requires_grad = requires_grad)
out = torch.empty(100, 100)
randint_result1 = randint(0, 10, (100, 100), out = out) # Success
randint_result2 = randint(0, 10, (100, 100), out = torch.empty(100, 100)) # Raise warning
```
To enable the check, one only needs to add a `verify_variable_names()` call at the top of the function body.
Thanks for your attention!
### Versions
pytorch==2.5.0
torchvision==0.20.0
torchaudio==2.5.0
pytorch-cuda=12.1
cc @pbelevich @albanD
| true
|
2,837,264,708
|
[ROCm] Move ROCm unstable MI300 jobs back to stable
|
amdfaa
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"topic: not user facing",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 5
|
CONTRIBUTOR
|
Fixes #145790
Needs #145504 to be merged first to resolve an artifact uploading issue with MI300 runners.
This PR moves rocm unstable MI300 back to stable. The change to unstable was introduced through this [PR](https://github.com/pytorch/pytorch/pull/145790). This was because the MI300s were failing with a [docker daemon](https://github.com/pytorch/pytorch/actions/runs/13015957622/job/36306779536) issue which has been resolved.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @ZainRizvi
| true
|
2,837,220,128
|
[ONNX] Support while and scan HOPs
|
justinchuby
|
open
|
[
"module: onnx",
"triaged"
] | 0
|
COLLABORATOR
|
cc @xadupre @titaiwangms @shubhambhokare1 @gramalingam
| true
|
2,837,217,502
|
Dynamo should handle `class.method_descriptor(instance, *args, **kwargs)` generically
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
See https://github.com/pytorch/pytorch/pull/146587#discussion_r1944919312
This is an uncommon (but legitimate) way to invoke a method. For example:
```py
# The str case that was fixed by #146587
s = "foobar"
str.isalnum(s)
# example with torch.Tensor (I haven't tested this)
x = torch.randn(3)
torch.Tensor.sin(x)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,837,163,817
|
inconsistency in torch.nn.Tanh on CPU and GPU
|
alionapi
|
closed
|
[
"module: nn",
"module: cpu",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
Inconsistency in `torch.nn.Tanh` between CPU and GPU:
```
import torch
self = torch.tensor([[[[0.0396193 + 1.5585054j, 0.5038033 - 1.3928472j],
[1.1071061 + 1.0378395j, 0.0687875 - 0.1666800j]],
[[-0.9338380 - 1.0284885j, 0.2591278 + 0.5482853j],
[0.5984055 - 0.5939694j, 0.6268274 - 1.2067362j]]]], dtype=torch.complex64)
self_cuda = self.cuda()
module = torch.nn.Tanh()
module.to(torch.complex64)
result_cpu = module(self)
module_cuda = torch.nn.Tanh()
module_cuda.to(torch.complex64)
result_gpu = module_cuda(self_cuda)
print("CPU result:\n", result_cpu)
print("GPU result:\n", result_gpu)
inconsistent = not torch.allclose(result_cpu, result_gpu.cpu(), atol=1e-05, rtol=1e-06)
print(f"inconsistency with atol=1e-05 and rtol=1e-06: {inconsistent}")
```
Output:
```
CPU result:
tensor([[[[23.0376+7.1386j, 1.9309-0.5668j],
[ 1.0903+0.2110j, 0.0706-0.1674j]],
[[-1.1099-0.3106j, 0.3399+0.5581j],
[ 0.6900-0.4256j, 1.4016-0.5797j]]]])
GPU result:
tensor([[[[23.0373+7.1386j, 1.9309-0.5668j],
[ 1.0903+0.2110j, 0.0706-0.1674j]],
[[-1.1099-0.3106j, 0.3399+0.5581j],
[ 0.6900-0.4256j, 1.4016-0.5797j]]]], device='cuda:0')
inconsistency with atol=1e-05 and rtol=1e-06: True
```
### Versions
(executed on Google Colab)
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.5.3.2
[pip3] nvidia-cuda-cupti-cu12==12.5.82
[pip3] nvidia-cuda-nvrtc-cu12==12.5.82
[pip3] nvidia-cuda-runtime-cu12==12.5.82
[pip3] nvidia-cudnn-cu12==9.3.0.75
[pip3] nvidia-cufft-cu12==11.2.3.61
[pip3] nvidia-curand-cu12==10.3.6.82
[pip3] nvidia-cusolver-cu12==11.6.3.83
[pip3] nvidia-cusparse-cu12==12.5.1.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] optree==0.14.0
[pip3] pynvjitlink-cu12==0.5.0
[pip3] torch==2.5.1+cu124
[pip3] torchaudio==2.5.1+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,837,147,286
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu"
] | 13
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [80c375570e2b6b2989a8610da1871f8a50dfddc7](https://github.com/intel/torch-xpu-ops/commit/80c375570e2b6b2989a8610da1871f8a50dfddc7), which includes:
- Aten operator coverage improvement
- SYCL kernel optimization
- Nested Tensor OPs support
| true
|
2,837,114,502
|
[Dynamo] Eliminate single `Self` import from typing_extensions
|
zeshengzong
|
closed
|
[
"open source",
"topic: not user facing",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
Replace `from typing_extensions import Self` with `from typing import Self`
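For illustration, a minimal sketch of the pattern being migrated (the class and method names here are made up):
```python
from typing import Self  # stdlib since Python 3.11; previously imported from typing_extensions

class Builder:
    def add_step(self, name: str) -> Self:  # returns the same (sub)class instance
        return self
```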
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,837,061,751
|
Optimize inductor `Self` typing
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 9
|
CONTRIBUTOR
|
Replace method return type with `Self` typing
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,837,057,069
|
Make sure cutlass kernel .cu file has configuration name and nvcc compile command
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
I think it's good to have everything in the .cu file, especially the nvcc compile command.
Technically, the configuration name can be found in the template already, so let me know if you think it's not needed.
Differential Revision: D69281295
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,837,037,220
|
Different gradient calculation for tensor.min() vs tensor.min(dim=0)
|
tuero
|
closed
|
[
"module: autograd",
"triaged",
"module: derivatives"
] | 2
|
NONE
|
### 🐛 Describe the bug
In a previous issue [here](https://github.com/pytorch/pytorch/issues/35699), it was noted that `tensor.min()` and `tensor.min(dim=0)` behave differently. The following example was given as the behaviour at the time:
```python
import torch
a = torch.tensor([0.1, 0.3, 0.1], dtype=torch.float32, requires_grad = True)
a_cp = torch.tensor([0.1, 0.3, 0.1], dtype=torch.float32, requires_grad = True)
b = a.min()
b.backward()
a.grad # Output is tensor([1., 0., 1.])
c, d = a_cp.min(dim=0)
c.backward()
a_cp.grad # Output is tensor([0., 0., 1.])
```
It was noted that the behaviour of `tensor.min(dim=0)` is the expected behaviour, and a change was made. However, I get the following behaviour now:
```python
import torch
a = torch.tensor([0.1, 0.3, 0.1], dtype=torch.float32, requires_grad = True)
a_cp = torch.tensor([0.1, 0.3, 0.1], dtype=torch.float32, requires_grad = True)
b = a.min()
b.backward()
a.grad # Output is tensor([0.5000, 0.0000, 0.5000])
c, d = a_cp.min(dim=0)
c.backward()
a_cp.grad # Output is tensor([1., 0., 0.])
```
Should the gradient for `tensor.min()` not mimic the behaviour of `tensor.min(dim=0)`? If not, what is the reasoning behind why the gradient would flow back to multiple elements in `tensor.min()` but not `tensor.min(dim=0)`?
Thanks!
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.4.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.8 (9ubuntu1~24.04)
CMake version: version 3.30.3
Libc version: glibc-2.39
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7
/usr/local/cuda-12.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-7960X CPU @ 2.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 4
CPU(s) scaling MHz: 36%
CPU max MHz: 4400.0000
CPU min MHz: 1200.0000
BogoMIPS: 5599.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault cat_l3 cdp_l3 pti ssbd mba ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_pkg_req vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 22 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.2.65
[pip3] nvidia-cuda-cupti-cu12==12.4.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.99
[pip3] nvidia-cuda-runtime-cu12==12.4.99
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.0.44
[pip3] nvidia-curand-cu12==10.3.5.119
[pip3] nvidia-cusolver-cu12==11.6.0.99
[pip3] nvidia-cusparse-cu12==12.3.0.142
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.99
[pip3] nvidia-nvtx-cu12==12.4.99
[pip3] torch==2.4.1+cu124
[pip3] torchaudio==2.4.1+cu124
[pip3] torchvision==0.19.1+cu124
[pip3] torchviz==0.0.3
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.2.65 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.99 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.99 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.99 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.0.44 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.119 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.0.99 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.0.142 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.99 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.99 pypi_0 pypi
[conda] torch 2.4.1+cu124 pypi_0 pypi
[conda] torchaudio 2.4.1+cu124 pypi_0 pypi
[conda] torchvision 0.19.1+cu124 pypi_0 pypi
[conda] torchviz 0.0.3 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,837,029,646
|
Enable Windows tests
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,837,029,045
|
Fix linter F821 error
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,837,020,732
|
[Docs] Fix description of `input` in `torch.addbmm()`
|
shink
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: docs"
] | 20
|
CONTRIBUTOR
|
Fixes #146613
| true
|
2,836,969,359
|
Exponentially slow compile time with repeated logsumexp when gradient is enabled
|
hkchengrex
|
closed
|
[
"oncall: pt2",
"module: compile-time"
] | 3
|
NONE
|
### 🐛 Describe the bug
I am trying to compile a simplified Sinkhorn algorithm. The core function is
```python
@torch.compile
def sinkhorn(Mr, u, v):
    for _ in range(num_iters):
        v = torch.logsumexp(Mr + u.unsqueeze(1), 0)
        u = torch.logsumexp(Mr + v.unsqueeze(0), 1)
    return u, v
```
With a **static** `num_iters`, the time it takes to compile gets exponentially longer with `num_iters`.
Minimal reproducer:
```python
import time
import torch
@torch.compile
def sinkhorn_2(Mr, u, v):
    for _ in range(2):
        v = torch.logsumexp(Mr + u.unsqueeze(1), 0)
        u = torch.logsumexp(Mr + v.unsqueeze(0), 1)
    return u, v

@torch.compile
def sinkhorn_3(Mr, u, v):
    for _ in range(3):
        v = torch.logsumexp(Mr + u.unsqueeze(1), 0)
        u = torch.logsumexp(Mr + v.unsqueeze(0), 1)
    return u, v

@torch.compile
def sinkhorn_4(Mr, u, v):
    for _ in range(4):
        v = torch.logsumexp(Mr + u.unsqueeze(1), 0)
        u = torch.logsumexp(Mr + v.unsqueeze(0), 1)
    return u, v

@torch.compile
def sinkhorn_5(Mr, u, v):
    for _ in range(5):
        v = torch.logsumexp(Mr + u.unsqueeze(1), 0)
        u = torch.logsumexp(Mr + v.unsqueeze(0), 1)
    return u, v

@torch.compile
def sinkhorn_6(Mr, u, v):
    for _ in range(6):
        v = torch.logsumexp(Mr + u.unsqueeze(1), 0)
        u = torch.logsumexp(Mr + v.unsqueeze(0), 1)
    return u, v

M = torch.randn(256, 256, device='cuda', requires_grad=True)
u = torch.zeros(256, device=M.device, dtype=M.dtype)
v = torch.zeros(256, device=M.device, dtype=M.dtype)

sinkhorn_fn = [sinkhorn_2, sinkhorn_3, sinkhorn_4, sinkhorn_5, sinkhorn_6]
for i, f in enumerate(sinkhorn_fn):
    start_time = time.time()
    f(M, u, v)
    torch.cuda.synchronize()
    print(f'Iteration {i} took {time.time() - start_time:.4f} seconds')
```
Output:
```
Iteration 0 took 1.1647 seconds
Iteration 1 took 0.3771 seconds
Iteration 2 took 5.8516 seconds
Iteration 3 took 141.5683 seconds
```
The one with 6 iterations did not finish within 10 minutes.
This issue **does not occur** if I do any one of the following:
1. Set `requires_grad` of `M` to False
2. Disable torch.compile
3. Use `torch.sum` rather than `torch.logsumexp`
4. Bring `+u` and `+v` out of `logsumexp`, i.e., Replace `torch.logsumexp(Mr + u.unsqueeze(1), 0)` with `torch.logsumexp(Mr, 0) + u.unsqueeze(1)`
5. Remove either one of the updates for `u` or `v`
6. Remove `Mr`
### Error logs
[tlparse.zip](https://github.com/user-attachments/files/18699450/tlparse.zip) is generated for the code below (5 iterations)
```python
import torch
@torch.compile
def sinkhorn(Mr, u, v):
    for _ in range(5):
        v = torch.logsumexp(Mr + u, 0)
        u = torch.logsumexp(Mr + v, 1)
    return u, v
M = torch.randn(256, 256, device='cuda', requires_grad=True)
u = torch.zeros(256, device=M.device, dtype=M.dtype)
v = torch.zeros(256, device=M.device, dtype=M.dtype)
sinkhorn(M, u, v)
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L40S
GPU 1: NVIDIA L40S
GPU 2: NVIDIA L40S
GPU 3: NVIDIA L40S
GPU 4: NVIDIA L40S
GPU 5: NVIDIA L40S
GPU 6: NVIDIA L40S
GPU 7: NVIDIA L40S
Nvidia driver version: 550.90.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9554 64-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3762.9880
CPU min MHz: 1500.0000
BogoMIPS: 6199.67
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 128 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] open_clip_torch==2.29.0
[pip3] optree==0.14.0
[pip3] pytorch-lightning==1.9.5
[pip3] pytorchvideo==0.1.5
[pip3] torch==2.6.0
[pip3] torch-stoi==0.2.3
[pip3] torchaudio==2.6.0
[pip3] torchcde==0.2.5
[pip3] torchcfm==1.0.0
[pip3] torchdiffeq==0.2.5
[pip3] torchdyn==1.0.6
[pip3] torchlibrosa==0.1.0
[pip3] torchmetrics==1.6.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] libopenvino-pytorch-frontend 2024.4.0 h5888daf_2 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] open-clip-torch 2.29.0 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] pytorchvideo 0.1.5 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torch-stoi 0.2.3 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchcde 0.2.5 pypi_0 pypi
[conda] torchcfm 1.0.0 pypi_0 pypi
[conda] torchdiffeq 0.2.5 pypi_0 pypi
[conda] torchdyn 1.0.6 pypi_0 pypi
[conda] torchlibrosa 0.1.0 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @oulgen @jamesjwu @aorenste @anijain2305 @laithsakka
| true
|
2,836,963,310
|
[Optimus] Include more corner cases in the select cat aten pass
|
mengluy0125
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"inductor_pattern_match"
] | 7
|
CONTRIBUTOR
|
Summary: Thanks to Shuai for reporting the bug in the pattern. We found a typo in the pass; we should make sure all the selects go to the cat node.
Test Plan:
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:split_cat_fx_aten_passes -- test_select_cat_post_grad
Buck UI: https://www.internalfb.com/buck2/2cd0888e-d803-43a8-8530-d97e6bc281b3
Test UI: https://www.internalfb.com/intern/testinfra/testrun/6192449699305108
Network: Up: 110KiB Down: 35KiB (reSessionID-687be0fa-031a-47a0-8780-5ab4cf4bbd94)
Executing actions. Remaining 0/4 6.6s exec time total
Command: test. Finished 2 local
Time elapsed: 2:12.0s
Tests finished: Pass 2. Fail 0. Fatal 0. Skip 0. Build failure 0
Differential Revision: D69278487
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,836,950,107
|
[cond] Refactor cond_op's signature to take *operands.
|
ydwu4
|
open
|
[
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146661
* #146660
This is a BC-breaking change for hop's IR schema. Previously,
```python
torch.cond(pred, true_fn, false_fn, (a, b))
# Old representation
torch.ops.higher_order.cond(pred, true_gm, false_gm, (a, b))
# New representation:
torch.ops.higher_order.cond(pred, true_gm, false_gm, a, b)
```
The benefit of this change is that it's much easier to construct the schema since the tuple is flattened. What's particularly troublesome about the previous representation is that it's hard to represent the mutation and alias information inside the tuple: we would have to change the legacy schema parser and verify (maybe re-purpose) the aliasInfo to support nested aliasInfo inside a tuple/list.
We'll also refactor other control flow operators.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D69279033](https://our.internmc.facebook.com/intern/diff/D69279033)
| true
|
2,836,949,960
|
[hop][inductor] don't promote arg type for cond and while_loop
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146661
* __->__ #146660
HOP subgraph codegen assumes argument types are not promoted. Otherwise, we might generate a wrong kernel.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D69279031](https://our.internmc.facebook.com/intern/diff/D69279031)
| true
|
2,836,932,104
|
[MTIA] (2/n) Implement PyTorch APIs to query/reset device peak memory usage
|
chaos5958
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: mtia"
] | 7
|
CONTRIBUTOR
|
Summary:
Public summary (shared with Github): This diff implements the correct version of the PyTorch API "max_memory_allocated".
Nit: the file previously contained two unit tests with the same name (due to a wrong revert); I deleted the deprecated one and revamped the correct version.
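As a rough usage sketch only, assuming the MTIA memory-stats API mirrors the CUDA one and that `torch.mtia.max_memory_allocated` is the exposed entry point (an assumption, not something stated in this diff):
```python
import torch

# Hypothetical: query peak device memory after running a workload,
# mirroring torch.cuda.max_memory_allocated.
if torch.mtia.is_available():
    x = torch.empty(1024, 1024, device="mtia")
    peak = torch.mtia.max_memory_allocated()  # assumed API name based on this summary
    print(f"peak allocated: {peak} bytes")
```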
Test Plan:
```
buck2 test //mtia/host_runtime/torch_mtia/tests:test_torch_mtia_api -- -r test_max_memory_allocated
```
https://www.internalfb.com/intern/testinfra/testrun/12103424065182810
Reviewed By: yuhc
Differential Revision: D68988435
cc @egienvalue
| true
|
2,836,931,882
|
[HOP] Mutation and alias rework
|
bohnstingl
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: inductor",
"module: dynamo"
] | 7
|
COLLABORATOR
|
This PR reworks the way input mutations and various aliases are checked.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @ydwu4
| true
|
2,836,910,493
|
[FlexAttention] Fix dynamic shapes in max-autotune
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146657
# Fixes
https://github.com/pytorch/pytorch/issues/146624
### Updated
From offline discussion, going with size_hint.
However, this does incur guards. I couldn't really think of a fancy way to do this. I was going to use `V.graph.sizevars.size_hint` with some default for the number of blocks, but we ultimately need some information about the input.
I am also not sure if size_hint is ALWAYS guaranteed to return the runtime value. I think it would be okay to not support unbacked symints (maybe).
For instance, in the repro, we quickly hit the recompile limit.
```Shell
torch._dynamo hit config.recompile_limit (8)
function: 'flex_attention' (/home/drisspg/meta/pytorch/torch/nn/attention/flex_attention.py:1161)
last reason: 0/0: tensor 'L['key']' size mismatch at index 2. expected 1, actual 546
To log all recompilation reasons, use TORCH_LOGS="recompiles".
To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html.
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng
| true
|
2,836,900,548
|
Optimize isclose() for CPU and GPU by adding specific implementations
|
ZelboK
|
open
|
[
"module: cpu",
"triaged",
"open source",
"release notes: intel"
] | 1
|
CONTRIBUTOR
|
`isclose()` is currently quite slow, so this PR adds specific implementations for both CPU and CUDA.
The CUDA implementation sees a ~4.9x improvement at 100M elements and ~18.7x at 400M elements.
The CPU implementation sees a ~5.7x improvement at 100M elements and ~5.9x at 400M elements.
Simple benchmark used (adapt for CPU as needed):
```python
import time
import numpy as np
import torch
def benchmark_isclose(shape1, shape2, num_runs=5):
    tensor1 = torch.randn(shape1, device="cuda")
    tensor2 = tensor1.clone()
    tensor2 += torch.randn_like(tensor2) * 0.001
    # warm up
    _ = torch.isclose(tensor1, tensor2)
    torch.cuda.synchronize()
    times = []
    for _ in range(num_runs):
        start_time = time.perf_counter()
        _ = torch.isclose(tensor1, tensor2)
        torch.cuda.synchronize()
        end_time = time.perf_counter()
        times.append(end_time - start_time)
    mean_time = np.mean(times)
    std_time = np.std(times)
    return mean_time, std_time

test_shapes = [
    (10000, 10000),  # 100M elements
    (20000, 20000),  # 400M elements
]

print("\nBenchmarking torch.isclose():")
print("-" * 50)
for shape in test_shapes:
    total_elements = np.prod(shape)
    print(f"\nTensor shape: {shape} ({total_elements:,} elements)")
    mean_time, std_time = benchmark_isclose(shape, shape)
    print(f"Mean time: {mean_time*1000:.2f} ms +/- {std_time*1000:.2f} ms")
    print(f"Elements per second: {total_elements/mean_time:,.0f}")
```
```
(optimized)
Tensor shape: (10000, 10000) (100,000,000 elements)
Mean time: 2.73 ms ± 0.26 ms
Elements per second: 36,611,905,024
Tensor shape: (20000, 20000) (400,000,000 elements)
Mean time: 8.98 ms ± 0.28 ms
Elements per second: 44,546,604,660
(unoptimized)
Tensor shape: (10000, 10000) (100,000,000 elements)
Mean time: 13.48 ms ± 0.28 ms
Elements per second: 7,420,814,236
Tensor shape: (20000, 20000) (400,000,000 elements)
Mean time: 166.90 ms ± 4.71 ms
Elements per second: 2,396,711,992
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @albanD
| true
|
2,836,886,440
|
add MXFP8 support to torch._scaled_mm
|
vkuzo
|
closed
|
[
"topic: improvements",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
# summary
Add blockwise MXFP8 support to `torch._scaled_mm` on CUDA capability 10.0 and higher devices. If the scales for A and B are of dtype `torch.float8_e8m0fnu`, we dispatch to the blockwise kernel from cuBLAS.
This is a skeleton PR where we test basic functionality (numerics of various simple matrices, as well as one end to end quantization + gemm).
Note that `scale_a` and `scale_b` are switched in either the cuBLAS kernel or in our wrapper, so this PR includes a manual hack to switch them again to hide this from the user. It would be good to figure this out at a future time, but IMO let's not block this PR on it.
We can tackle boundary conditions such as matrices not evenly divided into 128x128 tiles in future PRs.
Note that MXFP4 is not added in this PR - we can tackle that in a future PR.
This PR was created by taking https://github.com/pytorch/pytorch/pull/145562, switching e8m0 to in-core dtype, removing fp4 for now, and adding test cases.
# test plan
```
pytest test/test_matmul_cuda.py -k blockwise_mxfp8 -s
```
| true
|
2,836,855,253
|
[DCP] Introduce modules metadata in the storage_meta
|
saumishr
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 29
|
CONTRIBUTOR
|
Summary: Introduce the list of modules in the storage_meta which is shared between the planner and the storage writer. We will use it to let the storage writer know about the modules in the state dict and create module directories in the checkpoint.
Test Plan: UTs
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,836,842,594
|
windows Magma and cuda build for cu128
|
tinglvv
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: releng",
"ciflow/binaries_wheel",
"ci-no-td"
] | 16
|
COLLABORATOR
|
https://github.com/pytorch/pytorch/issues/145570
Removing `.ci/pytorch/windows/internal/cuda_install.bat` as it is a duplicate of `.github/scripts/windows/cuda_install.bat`. The latter is the one in use - https://github.com/pytorch/pytorch/pull/146653/files#diff-613791f266f2f7b81148ca8f447b0cd6c6544f824f5f46a78a2794006c78957bR8
cc @atalman @ptrblck @nWEIdia
| true
|
2,836,826,872
|
[inductor] Better exception error messages for cache_on_self
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146652
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,836,797,935
|
[inductor] Use index_dtype (int32/int64 depending on size) for argmax accumulators
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146651
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,836,784,653
|
fuzzer: disable "fail_on_recompile_limit_hit" and "suppress_errors"
|
exclamaforte
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 21
|
CONTRIBUTOR
|
Summary:
needed for https://github.com/pytorch/pytorch/pull/146513
Test Plan:
the existing tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,836,771,124
|
Fix get_top() to return the base level event of the stack, not the most recently started event
|
jamesjwu
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146649
`get_top()` is really confusing when talking about a stack, because it can mean either the most recently started event on the stack or the top-level event in perfetto (which displays the stack upside down). Rename it to `get_outermost` and fix the bug associated with it, so that it returns the correct value from the stack.
Running nanogpt now puts `guard_latency_us` correctly in the `dynamo` event:
```
tlp python benchmarks/dynamo/torchbench.py --backend inductor --device cuda --only nanogpt --amp --cold-start-latency --print-compilation-time --training --performance 2>&1 --dynamic-shapes | tee out.log
```
<img width="1281" alt="image" src="https://github.com/user-attachments/assets/4eeb371a-4d81-415a-acc4-7d303a4b2a93" />
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,836,748,517
|
[MPS] Extend `torch.special.sinc` to complex
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146648
And to integral data types as well
Was too lazy to deduce the formula myself (or write a sympy script), but ChatGPT did a decent job of doing it, though it forgot that the input must be multiplied by $$\pi$$:
```math
\text{Re}\left(\text{sinc}(x + i y)\right) = \frac{\sin(x)\cosh(y)\, x + \cos(x)\sinh(y)\, y}{x^2 + y^2}
```
```math
\text{Im}\left(\text{sinc}(x + i y)\right) = \frac{\cos(x)\sinh(y)\, x - \sin(x)\cosh(y)\, y}{x^2 + y^2}
```
| true
|
2,836,723,921
|
placeholder for shell dtype improvements in pytorch/pytorch
|
vkuzo
|
open
|
[
"module: internals",
"triaged"
] | 0
|
CONTRIBUTOR
|
### Context
Context: https://github.com/pytorch/pytorch/issues/146414
Tracking the issues we encounter as we add MX dtypes to core for fixing later.
#### printing of shell dtypes
We should be able to print a tensor with a shell dtype. Currently this fails for the bits, int1..7, and uint1..7 dtypes. A workaround for `float4_e2m1fn_x2` is in https://github.com/pytorch/pytorch/pull/146578 (just print the uint8 representation), and IMO we should do the same for other shell dtypes without ops defined.
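A minimal sketch of the uint8-reinterpretation workaround described above, assuming a 1-byte shell dtype such as `torch.float8_e8m0fnu` is available in the build:
```python
import torch

# Hypothetical workaround: display the raw byte representation instead of the shell dtype itself.
t = torch.empty(4, dtype=torch.float8_e8m0fnu)  # assumes this dtype exists in the current build
print(t.view(torch.uint8))  # reinterpret the same storage as uint8 for printing
```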
#### testing for shell dtypes
We should have a standardized test for shell dtypes, and ensure basic functionality (create tensor, save/load, view, cat, etc) works for all of them.
#### cleanups from e8m0 PR
old PR (with comments): https://github.com/pytorch/pytorch/pull/146427
new PR: https://github.com/pytorch/pytorch/pull/147462
List of cleanups:
* aten/src/ATen/DLConvertor.cpp use macro for float8 dtypes
* aten/src/ATen/native/cpu/FillKernel.cpp use macro to expand fill_kernel for float8 dtypes
* aten/src/ATen/native/cuda/Copy.cu float8 copy code is copy-pastaed, clean it up
* aten/src/ATen/native/cuda/jit_utils.h can we make typeName<at::Float8_e8m0fnu> generic
* c10/core/ScalarType.cpp - use macros to simplify getDtypeNames
* c10/core/ScalarType.h - If we expect to have numeric_limits for everything, let's just have a big macro for the whole thing.
If we're hardcoding it, let's just use the macro and a "true"/"false" below?
* c10/util/Float8_e8m0fnu.cpp - Can we have these in a single shared cpp file built with macro to remove the need for a new cpp file?
* c10/util/TypeCast.h - Can we make all these template specialization happen based off our apply macros?
* tools/pyi/gen_pyi.py - don't explicitly list dtypes here; get it from canonical source
* torch/csrc/utils/python_scalars.h - make store_scalar and load_scalar simpler with macros
* c10/util/Float8_e8m0fnu-inl.h - remove implicit conversions
* c10/util/Float8_e8m0fnu-inl.h - see if we need to rewrite without control flow
* c10/util/Float8_e8m0fnu.h - do we need to special case OPENCL?
### Versions
n/a
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh
| true
|
2,836,716,420
|
UNSTABLE trunk / linux-focal-rocm6.3-py3.10 / test
|
huydhn
|
closed
|
[
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 1
|
CONTRIBUTOR
|
Part of https://github.com/pytorch/pytorch/issues/146409
They run only on PRs when `ciflow/trunk` is used (https://github.com/pytorch/pytorch/pull/145629)?
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,836,705,454
|
[Inductor] Add a unit test
|
desertfire
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Summary: To follow up https://github.com/pytorch/pytorch/pull/146293, add a JIT Inductor unit test. Other Triton template may need similar fixes.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,836,705,164
|
Scalar tensor fails to broadcast with shape of in-graph constructed NJT
|
jbschlosser
|
open
|
[
"triaged",
"module: nestedtensor",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
Repro:
```python
import torch
@torch.compile
def f(values, offsets):
    nt = torch.nested.nested_tensor_from_jagged(values, offsets)
    return torch.where(nt > 0.0, torch.ones_like(nt), 0)
values = torch.randn(10, 5, device="cuda")
offsets = torch.tensor([0, 2, 5, 7, 10], device="cuda")
output = f(values, offsets)
```
Error:
```
torch._inductor.exc.InductorError: LoweringException: NotImplementedError: inductor does not support layout=torch.jagged
target: aten.scalar_tensor.default
args[0]: 0
kwargs: {'dtype': torch.float32, 'layout': torch.jagged, 'device': device(type='cuda', index=0)}
```
This comes from trying to broadcast the `0` value with the NJT shape. The fix in #141500 does not work for an in-graph constructed NJT.
cc @cpuhrsch @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @aakhundov
| true
|
2,836,697,642
|
[inductor] Improve codegen for argmax+max
|
jansel
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"internal ramp-up task"
] | 7
|
CONTRIBUTOR
|
```py
@torch.compile(dynamic=True)
def fn(x):
    return torch.max(x, -1)
```
generates the following code:
```py
@triton.jit
def triton_red_fused_max_0(in_ptr0, out_ptr0, out_ptr1, ks0, xnumel, r0_numel, XBLOCK : tl.constexpr, R0_BLOCK : tl.constexpr):
    rnumel = r0_numel
    RBLOCK: tl.constexpr = R0_BLOCK
    xoffset = tl.program_id(0) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
    xmask = xindex < xnumel
    r0_base = tl.arange(0, R0_BLOCK)[None, :]
    rbase = r0_base
    x0 = xindex
    _tmp2 = tl.full([XBLOCK, R0_BLOCK], float("-inf"), tl.float32)
    _tmp4 = tl.full([XBLOCK, R0_BLOCK], float("-inf"), tl.float32)
    _tmp4_index = tl.full([XBLOCK, R0_BLOCK], 9223372036854775807, tl.int64)
    for r0_offset in range(0, r0_numel, R0_BLOCK):
        r0_index = r0_offset + r0_base
        r0_mask = r0_index < r0_numel
        roffset = r0_offset
        rindex = r0_index
        r0_1 = r0_index
        tmp0 = tl.load(in_ptr0 + (r0_1 + ks0*x0), xmask & r0_mask, eviction_policy='evict_first', other=0.0)
        tmp1 = tl.broadcast_to(tmp0, [XBLOCK, R0_BLOCK])
        tmp3 = triton_helpers.maximum(_tmp2, tmp1)
        _tmp2 = tl.where(r0_mask & xmask, tmp3, _tmp2)
        _tmp4_next, _tmp4_index_next = triton_helpers.maximum_with_index(
            _tmp4, _tmp4_index, tmp1, rindex
        )
        _tmp4 = tl.where(r0_mask & xmask, _tmp4_next, _tmp4)
        _tmp4_index = tl.where(r0_mask & xmask, _tmp4_index_next, _tmp4_index)
    tmp2 = triton_helpers.max2(_tmp2, 1)[:, None]
    tmp4_val, tmp4_idx = triton_helpers.max_with_index(_tmp4, _tmp4_index, 1)
    tmp4 = tmp4_idx[:, None]
    tl.store(out_ptr0 + (x0), tmp2, xmask)
    tl.store(out_ptr1 + (x0), tmp4, xmask)
```
This could be improved by doing:
```diff
diff --git a/out.py b/out.py
index 5d0acd594f7..5c3879867ed 100644
--- a/out.py
+++ b/out.py
@@ -8,7 +8,6 @@ def triton_red_fused_max_0(in_ptr0, out_ptr0, out_ptr1, ks0, xnumel, r0_numel, X
     r0_base = tl.arange(0, R0_BLOCK)[None, :]
     rbase = r0_base
     x0 = xindex
-    _tmp2 = tl.full([XBLOCK, R0_BLOCK], float("-inf"), tl.float32)
     _tmp4 = tl.full([XBLOCK, R0_BLOCK], float("-inf"), tl.float32)
     _tmp4_index = tl.full([XBLOCK, R0_BLOCK], 9223372036854775807, tl.int64)
     for r0_offset in range(0, r0_numel, R0_BLOCK):
@@ -19,15 +18,13 @@ def triton_red_fused_max_0(in_ptr0, out_ptr0, out_ptr1, ks0, xnumel, r0_numel, X
         r0_1 = r0_index
         tmp0 = tl.load(in_ptr0 + (r0_1 + ks0*x0), xmask & r0_mask, eviction_policy='evict_first', other=0.0)
         tmp1 = tl.broadcast_to(tmp0, [XBLOCK, R0_BLOCK])
-        tmp3 = triton_helpers.maximum(_tmp2, tmp1)
-        _tmp2 = tl.where(r0_mask & xmask, tmp3, _tmp2)
         _tmp4_next, _tmp4_index_next = triton_helpers.maximum_with_index(
             _tmp4, _tmp4_index, tmp1, rindex
         )
         _tmp4 = tl.where(r0_mask & xmask, _tmp4_next, _tmp4)
         _tmp4_index = tl.where(r0_mask & xmask, _tmp4_index_next, _tmp4_index)
-    tmp2 = triton_helpers.max2(_tmp2, 1)[:, None]
     tmp4_val, tmp4_idx = triton_helpers.max_with_index(_tmp4, _tmp4_index, 1)
     tmp4 = tmp4_idx[:, None]
+    tmp2 = tmp4_val[:, None]
     tl.store(out_ptr0 + (x0), tmp2, xmask)
     tl.store(out_ptr1 + (x0), tmp4, xmask)
```
because the `argmax` already computes the `amax`, so we don't need a separate reduction (see the small sketch after the list below).
We could either:
1) Have a single two-output reduction op that does both `amax+argmax`
2) Combine the two at codegen time using the reduction cache. (We could use a DeferredLine to swap between `triton_helpers.max_with_index` and `triton_helpers.max2` based on whether the output is used.)
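For context, a small eager-mode sketch showing that the two outputs in question already come from a single op, which is why sharing one reduction is enough:
```python
import torch

x = torch.randn(4, 8)
values, indices = torch.max(x, -1)  # one call yields both the amax and the argmax
assert torch.equal(values, x.gather(-1, indices.unsqueeze(-1)).squeeze(-1))
```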
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @aakhundov
| true
|
2,836,695,721
|
[poc] force UntypedStorage.from_buffer(buf) to return meta storage under FakeTensorMode
|
bdhirsh
|
closed
|
[
"Merged",
"release notes: composability"
] | 2
|
CONTRIBUTOR
|
context here: https://fb.workplace.com/groups/326136610199609/permalink/495389539940981/
This PR is an attempt to make it such that if you create a tensor from an external buffer (using `UntypedStorage.from_buffer(buf)`), we can generate a proper fake tensor for you out of the box.
The annoying bit is that there are not any dispatcher ops to interpose on and change behavior. So instead, I took the manual C binding and tweaked the storage device to be "meta" if we see an active fake mode.
Put "poc" in the title since I... think this is hopefully reasonable, but I can be convinced that it's not :)
```
from torch._subclasses.fake_tensor import FakeTensorMode
import pickle
import io
import torch
from contextlib import nullcontext
use_fake_tensor = True
with FakeTensorMode() if use_fake_tensor else nullcontext():
    obj = [1, 2]
    f = io.BytesIO()
    pickle.Pickler(f).dump(obj)
    byte_storage = torch.ByteStorage._from_buffer(f.getvalue())  # type: ignore[attr-defined]
    t = torch.ByteTensor(byte_storage)
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #133044
* #146731
* #146729
* __->__ #146642
| true
|
2,836,693,565
|
[dim order] solve broken doc
|
Gasoonjia
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146641
Differential Revision: [D69265340](https://our.internmc.facebook.com/intern/diff/D69265340/)
| true
|
2,836,661,764
|
POC for mixed prec optim frontend
|
janeyx99
|
open
|
[
"Stale",
"release notes: optim"
] | 2
|
CONTRIBUTOR
|
This PR is a prototype of what a frontend for requesting mixed precision could look like in torch.optim, through set_dtype_policy in optimizer.py.
This is not meant to be landable but to start some discussions on what people want/would like to see and to ask if there are things I haven't considered yet.
This currently only works with Adam(W)!
A toy script for how to use:
```
import torch
model = torch.nn.Sequential(
torch.nn.Linear(2, 3),
torch.nn.Sigmoid(),
torch.nn.Linear(3, 1),
torch.nn.Sigmoid(),
)
model.to("cuda")
optim = torch.optim.AdamW(model.named_parameters(), foreach=False)
mp_policy = {
"exp_avg": lambda _: torch.bfloat16,
"exp_avg_sq": lambda _: torch.bfloat16,
"max_exp_avg_sq": lambda _: torch.bfloat16,
}
optim.set_dtype_policy(mp_policy)
i = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], device="cuda").reshape(3, 2)
l = model(i).sum()
l.backward()
optim.step()
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147653
* __->__ #146640
| true
|
2,836,653,407
|
[ONNX] Adjust and add deprecation messages
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: onnx",
"topic: deprecation",
"ci-no-td"
] | 14
|
COLLABORATOR
|
Adjust and add deprecation messages to torch.onnx utilities and verification methods because they are only related to TorchScript and are obsolete.
Removed unused `_exporter_states.py` and removed the internal deprecation module in favor of the typing_extensions deprecated decorator.
| true
|
2,836,597,113
|
use None to slice when list has one element only
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
When `autotune_num_choices_displayed` is None and the list of choices has length 1, slicing with `[:-1]` returns every element except the last, which produced an empty list.
Slicing with `[:None]` returns the full list and works as intended.
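A quick standalone illustration of the difference on a one-element list (just the slicing behavior, not the actual inductor code):
```
choices = ["only_choice"]

# [:-1] drops the last element, so a single-element list becomes empty.
print(choices[:-1])    # []

# [:None] is an unbounded slice and keeps every element.
print(choices[:None])  # ['only_choice']
```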
Differential Revision: D69265168
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,836,588,986
|
gloo: fix building system gloo with CUDA/HIP
|
nlbrown2
|
open
|
[
"module: build",
"module: cuda",
"triaged",
"open source",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Fix incorrect linking of Gloo's libraries when building with system Gloo. Previously, either Gloo's native library or Gloo's CUDA library was linked. However, Gloo had changed such that all users of Gloo must link the native library, and can optionally link the CUDA or HIP library for Gloo + CUDA/HIP support.
This had been updated when building/linking with vendored Gloo, but not when using system Gloo.
Fixes: #146239
Reported-by: Adam J Stewart <ajstewart426@gmail.com>
cc @malfet @seemethere @ptrblck @msaroufim @eqy
| true
|
2,836,582,754
|
example repro failure
|
c00w
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146636
Summary:
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,836,580,218
|
ByteTensor fails under FakeTensorMode()
|
haibchen
|
open
|
[
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The code block below reproduces the error, "Attempted to set the storage of a tensor on device "meta" to a storage on different device "cpu". This is no longer allowed; the devices must match". Without FakeTensorMode, the tensor is allocated on the CPU device just fine.
```
from torch._subclasses.fake_tensor import FakeTensorMode
import pickle
import io
import torch
from contextlib import nullcontext
use_fake_tensor = True
with FakeTensorMode() if use_fake_tensor else nullcontext():
obj = [1, 2]
f = io.BytesIO()
pickle.Pickler(f).dump(obj)
byte_storage = torch.ByteStorage._from_buffer(f.getvalue()) # type: ignore[attr-defined]
t = torch.ByteTensor(byte_storage)
```
stack trace
```
RuntimeError Traceback (most recent call last)
Cell In[9], line 13
11 pickle.Pickler(f).dump(obj)
12 byte_storage = torch.ByteStorage._from_buffer(f.getvalue()) # type: ignore[attr-defined]
---> 13 t = torch.ByteTensor(byte_storage)
File /mnt/xarfuse/uid-434836/384234e3-seed-nspid4026531836_cgpid282904266-ns-4026531841/torch/utils/_stats.py:27, in count.<locals>.wrapper(*args, **kwargs)
25 simple_call_counter[fn.__qualname__] = 0
26 simple_call_counter[fn.__qualname__] = simple_call_counter[fn.__qualname__] + 1
---> 27 return fn(*args, **kwargs)
File /mnt/xarfuse/uid-434836/384234e3-seed-nspid4026531836_cgpid282904266-ns-4026531841/torch/_subclasses/fake_tensor.py:1269, in FakeTensorMode.__torch_dispatch__(self, func, types, args, kwargs)
1265 assert (
1266 torch._C._get_dispatch_mode(torch._C._TorchDispatchModeKey.FAKE) is None
1267 ), func
1268 try:
-> 1269 return self.dispatch(func, types, args, kwargs)
1270 except TypeError:
1271 log.exception("fake tensor raised TypeError")
File /mnt/xarfuse/uid-434836/384234e3-seed-nspid4026531836_cgpid282904266-ns-4026531841/torch/_subclasses/fake_tensor.py:1810, in FakeTensorMode.dispatch(self, func, types, args, kwargs)
1807 return func(*args, **kwargs)
1809 if self.cache_enabled:
-> 1810 return self._cached_dispatch_impl(func, types, args, kwargs)
1811 else:
1812 return self._dispatch_impl(func, types, args, kwargs)
File /mnt/xarfuse/uid-434836/384234e3-seed-nspid4026531836_cgpid282904266-ns-4026531841/torch/_subclasses/fake_tensor.py:1380, in FakeTensorMode._cached_dispatch_impl(self, func, types, args, kwargs)
1377 FakeTensorMode.cache_bypasses[e.reason] += 1
1379 if output is _UNASSIGNED:
-> 1380 output = self._dispatch_impl(func, types, args, kwargs)
1382 return output
File /mnt/xarfuse/uid-434836/384234e3-seed-nspid4026531836_cgpid282904266-ns-4026531841/torch/_subclasses/fake_tensor.py:2381, in FakeTensorMode._dispatch_impl(self, func, types, args, kwargs)
2379 try:
2380 with in_kernel_invocation_manager(self):
-> 2381 r = func(*args, **kwargs)
2382 except NotImplementedError as not_implemented_error:
2383 return maybe_run_unsafe_fallback(not_implemented_error)
File /mnt/xarfuse/uid-434836/384234e3-seed-nspid4026531836_cgpid282904266-ns-4026531841/torch/_ops.py:756, in OpOverload.__call__(self, *args, **kwargs)
755 def __call__(self, /, *args, **kwargs):
--> 756 return self._op(*args, **kwargs)
RuntimeError: Attempted to set the storage of a tensor on device "meta" to a storage on different device "cpu". This is no longer allowed; the devices must match.
```
### Versions
fb-internal
cc @ezyang @albanD @chauhang @penguinwu @eellison @zou3519 @bdhirsh @yf225
| true
|
2,836,519,538
|
Add Structured Tracing for Traced Graph Edge Details for AC Debugging
|
basilwong
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 16
|
CONTRIBUTOR
|
Summary:
Updating the structured trace infrastructure so that we are able to output to Zoomer and have an E2E solution.
Context Doc: https://docs.google.com/document/d/1T6omIBEWVhbOiwDLSLffgQwjxiT2rQv8QvvQwXkw4fY/edit?usp=sharing
Test Plan:
### Testing Structured Log + tlparse locally
Command:
```
TORCH_TRACE=/data/users/basilwong/fbsource/fbcode/log_torch_trace buck2 run mode/opt //aps_models/ads/icvr:icvr_launcher -- mode=local_fb_fm_v4 launcher.num_workers=2
```
Torch Trace Logs (local then sent to paste): P1686419449
```
cat log_torch_trace/dedicated_log_torch_trace_rank_0_2lg012xo.log | pastry
P1686419449
```
tlparse output: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpyiv5wj/rank_1/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
tlparse graph edge details output: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpyiv5wj/rank_1/9_0_0/joint_graph_information_397.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
Differential Revision: D61557220
| true
|
2,836,464,358
|
[NJT] Fix inference mode for composite implicit ops without nested-specific kernel
|
soulitzer
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: nested tensor"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #146633
| true
|
2,836,339,614
|
[ROCm] OCP FP8 Support for new GPUs
|
petrex
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: rocm",
"release notes: linalg_frontend",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
TLDR: Follow-up to / builds on top of https://github.com/pytorch/pytorch/pull/144476; adds OCP FP8 support for gfx950.
refer to https://github.com/pytorch/ao/pull/1677
This pull request includes several changes to improve compatibility and support for new GPU architectures and data types, particularly for ROCm. The key updates involve adding support for new ROCm versions and GPU architectures, updating data type handling, and removing outdated checks.
### Improvements to GPU Architecture and ROCm Version Support:
* [`aten/src/ATen/Context.cpp`](diffhunk://#diff-33de472d304acbe57d693c8567370c638068bedc1aa0ce8e9dc115dad05a7810L323-R326): Added support for new GPU architectures `gfx1200`, `gfx1201`, and `gfx950` based on ROCm version checks.
* [`aten/src/ATen/native/cuda/Blas.cpp`](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abL196-R199): Updated architecture support in multiple functions to include `gfx1200`, `gfx1201`, and `gfx950` based on ROCm version checks. [[1]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abL196-R199) [[2]](diffhunk://#diff-e8a569efee1e650172f120a0fdcda024fe3e4703a4ee3336425c8f685af6b3abL865-R876)
### Updates to Data Type Handling:
* [`aten/src/ATen/cuda/CUDADataType.h`](diffhunk://#diff-9188bb13b1a49f459141f5f9b875593d1c5ce2beb5ad711fdbaf5bc7089ec015L81-L98): Enhanced data type conversion to include new float8 types for both CUDA and ROCm environments.
* [`aten/src/ATen/cuda/tunable/GemmHipblaslt.h`](diffhunk://#diff-bfa1a3b5d4bef1892bf50338775f3b0fd8cd31fc1868148f3968b98aefb68e3fL29-R80): Updated `HipDataTypeFor` template to handle new float8 types and added hard-coded enum values for ROCm versions prior to 6.3.
### Removal of Outdated Checks:
* [`cmake/public/LoadHIP.cmake`](diffhunk://#diff-b98e27b9a5f196a6965a99ee5a7bb15b3fc633d6375b767635b1b04ccb2fd3d5L169-L197): Removed the check for `HIP_NEW_TYPE_ENUMS` as it is no longer necessary with the updated ROCm versions. [[1]](diffhunk://#diff-b98e27b9a5f196a6965a99ee5a7bb15b3fc633d6375b767635b1b04ccb2fd3d5L169-L197) [[2]](diffhunk://#diff-b98e27b9a5f196a6965a99ee5a7bb15b3fc633d6375b767635b1b04ccb2fd3d5L211-R182)
These changes ensure better compatibility and performance on newer hardware and software environments, particularly for users leveraging ROCm and CUDA for deep learning and scientific computing tasks.
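For readers unfamiliar with the naming: the OCP float8 formats referenced above are exposed in Python as `torch.float8_e4m3fn` and `torch.float8_e5m2`, while the `*_fnuz` variants are the ROCm-specific formats used on earlier architectures. A minimal, hardware-agnostic illustration of the dtypes (the CPU cast below is purely for demonstration):
```
import torch

x = torch.randn(4, 4)

# OCP (Open Compute Project) float8 formats
x_e4m3 = x.to(torch.float8_e4m3fn)
x_e5m2 = x.to(torch.float8_e5m2)

# fnuz variant (finite-only, no negative zero, different exponent bias),
# used by earlier ROCm architectures
x_fnuz = x.to(torch.float8_e4m3fnuz)

print(x_e4m3.dtype, x_e5m2.dtype, x_fnuz.dtype)
# torch.float8_e4m3fn torch.float8_e5m2 torch.float8_e4m3fnuz
```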
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,836,320,869
|
Support ignoring parameters in FSDP2
|
ckluk2
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 24
|
CONTRIBUTOR
|
Differential Revision: D69153051
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,836,270,226
|
[torch] fix builds for older pybind
|
suo
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 4
|
MEMBER
|
Summary:
Some versions of pybind11 we build with don't have `py::set_error`,
so just use the underlying Python C API instead.
Test Plan: unit tests
Differential Revision: D69254629
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|