| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,788,707,982
|
[MPSInductor] Support `abs` in MetalPrintExpr
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144827
* __->__ #144826
* #144796
* #144795
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,788,678,605
|
Fix MPS returns 0 on OOB
|
JoeyShapiro
|
closed
|
[
"triaged",
"open source",
"Stale",
"release notes: mps"
] | 2
|
NONE
|
Fixes #144824
This adds additional checks to the MPS code to make sure the index is in range; if the index is out of range, it will now cause a crash. This has caused me grief when testing code on MPS and then sending it to CUDA, and it would have gone unnoticed had I fully trained on MPS, leaving my model subject to undefined behavior.
I have done this in the most performant way I can think of, and am open to ideas. The root cause is that MPS returns undefined results when the index passed to `mpsGraph gatherWithUpdatesTensor` is out of bounds. While this is intentional on MPS's side, I feel PyTorch on MPS should follow what CUDA and the CPU do, which is crash. This lowers the performance of MPS, but would make it more usable in my mind.
I could also add some tests if needed, but I'm not sure whether the testing suite can handle an intentional crash. I would be more than happy to add one if someone pointed me in the right direction.
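For illustration only, here is a minimal Python-level sketch of the kind of bounds check described above (the actual change lives in the MPS backend, so the names and placement here are assumptions):
```python
# Hypothetical Python-level illustration of the proposed check: validate
# indices before the gather so out-of-bounds access fails loudly instead of
# silently returning zeros. The .item() calls force a device sync, which is
# the performance cost mentioned above.
import torch

def checked_embedding(embed: torch.nn.Embedding, idx: torch.Tensor) -> torch.Tensor:
    if idx.numel() > 0:
        lo, hi = int(idx.min().item()), int(idx.max().item())
        if lo < 0 or hi >= embed.num_embeddings:
            raise IndexError(
                f"index out of range: got [{lo}, {hi}] for embedding of size {embed.num_embeddings}"
            )
    return embed(idx)
```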
| true
|
2,788,672,010
|
[MPS] Indexing Returns 0 if OOB
|
JoeyShapiro
|
open
|
[
"module: error checking",
"triaged",
"module: mps"
] | 3
|
NONE
|
### 🐛 Describe the bug
Using PyTorch to train a model on macOS worked fine, so I switched to using CUDA, where it would crash. The issue is that CUDA (and the CPU) will raise an error if you index out of bounds, while MPS returns 0. This causes an inconsistency between backends and results in undefined behavior on MPS. This was tested on PyTorch version `2.1.1+cu121` and on the latest GitHub main at the time this issue was created.
The following code was tested on each platform:
```python
import torch
import torch.nn as nn
```
```python
# CPU
embed = nn.Embedding(10, 2).to('cpu')
t = torch.tensor(10).to('cpu')
print(embed(t))
"""
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[2], line 3
1 embed = nn.Embedding(10, 2).to('cpu')
2 t = torch.tensor(10).to('cpu')
----> 3 print(embed(t))
...
IndexError: index out of range in self
"""
```
```python
# CUDA
embed = nn.Embedding(10, 2).to('cuda')
t = torch.tensor(10).to('cuda')
print(embed(t))
"""
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[2], line 3
1 embed = nn.Embedding(10, 2).to('cuda')
2 t = torch.tensor(10).to('cuda')
----> 3 print(embed(t))
...
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
"""
```
```python
# MPS
embed = nn.Embedding(10, 2).to('mps')
t = torch.tensor(10).to('mps')
print(embed(t))
"""
tensor([0., 0.], device='mps:0', grad_fn=<EmbeddingBackward0>)
"""
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.30.1
Libc version: N/A
Python version: 3.13.1 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 10:38:40) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit-Mach-O
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] optree==0.13.1
[pip3] torch==2.6.0a0+git0431d47
[conda] numpy 2.2.1 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0a0+git0431d47 pypi_0 pypi
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,788,671,765
|
Fix torch.normal ignores default_device
|
zeshengzong
|
closed
|
[
"open source",
"Stale"
] | 3
|
CONTRIBUTOR
|
Follow-up to #144070. Fixes #122886
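A minimal repro sketch of the behavior the linked issue describes (the device choice is illustrative and assumes a CUDA build):
```python
# Sketch: with a default device set, torch.normal is expected to allocate on
# that device; the linked issue reports it falling back to CPU instead.
import torch

if torch.cuda.is_available():
    torch.set_default_device("cuda")
    t = torch.normal(mean=0.0, std=1.0, size=(2, 2))
    print(t.device)  # expected: cuda:0 once the fix lands
```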
| true
|
2,788,666,272
|
Performance regression when using @torch.compile compared to no compilation
|
vladkvit
|
closed
|
[
"needs reproduction",
"module: performance",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I was playing around with torch.compile functionality, and came across a ~20x slowdown when running this toy code (counts even numbers):
```python
import torch
from datetime import datetime

batch_size = 1024 * 8000
final_num = 2147483647
num_batches = final_num // batch_size
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("device", device)

# @torch.compile
def bench():
    final_total = torch.tensor(0, dtype=torch.int32, device=device)
    for i in range(num_batches):
        batch = torch.arange(
            i * batch_size, (i + 1) * batch_size, dtype=torch.int32, device=device
        )
        batch_sum = torch.sum(batch % 2)
        final_total += batch_sum
    return final_total

a = datetime.now()
final_total = bench()
b = datetime.now()
print(f"Final Total: {final_total.item()}, {b-a} seconds")
```
On my 4090, I am seeing 0.13 seconds without the decorator and 2.3 seconds with the decorator (tested by running multiple times; excluding the very first run when the compilation happens). Is this expected behavior?
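For reference, a minimal sketch of a CUDA-event-based timing harness (assuming a CUDA device is present); it absorbs compile/autotune cost in warm-up runs and synchronizes before reading the clock, which avoids the usual pitfalls when timing compiled GPU code:
```python
import torch

def time_fn(fn, warmup=3, iters=10):
    # Warm-up runs absorb compilation and autotuning cost.
    for _ in range(warmup):
        fn()
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # milliseconds per iteration
```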
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro (10.0.26100 64-bit)
GCC version: Could not collect
Clang version: 19.1.4
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: True
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.85
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 7 7800X3D 8-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 4201
MaxClockSpeed: 4201
L2CacheSize: 8192
L2CacheSpeed: None
Revision: 24834
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.0
[pip3] onnxruntime==1.17.3
[pip3] pytorch-ignite==0.5.1
[pip3] pytorch-lightning==2.1.4
[pip3] pytorch-metric-learning==2.3.0
[pip3] rapidocr-onnxruntime==1.3.24
[pip3] torch==2.5.1+cu124
[pip3] torch_geometric==2.4.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchao==0.7.0+cpu
[pip3] torchaudio==2.5.1+cu124
[pip3] torchmetrics==1.2.1
[pip3] torchtune==0.3.1
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
cc @msaroufim @chauhang @penguinwu
| true
|
2,788,649,799
|
Change back to 'linux.rocm.gpu.2'.
|
amdfaa
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Due to limited upstream CI capacity, we thought it would be best to have periodic runs on all the available upstream CI nodes.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,788,648,887
|
[dynamo] Do not always skip code objects unconditionally
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
MEMBER
|
Currently, when Dynamo determines that a frame should be skipped, we will also skip tracing all future calls to the same code object. This can cause issues when skipping a frame is dependent on inputs to the function:
```python
import torch

@torch.compile(dynamic=False)
def fn(x, n):
    if n == 0:
        try:
            # causes frame to be skipped
            torch._dynamo.graph_break()
        finally:
            pass
    if torch.compiler.is_compiling():
        return x + 1
    return x - 1

print(fn(torch.ones(3), 0))  # skipped
print(fn(torch.ones(3), 1))  # skipped

import torch._dynamo
torch._dynamo.reset()

print(fn(torch.ones(3), 1))  # compiled!
print(fn(torch.ones(3), 0))  # skipped

# Output:
# tensor([0., 0., 0.])
# tensor([0., 0., 0.])
# tensor([2., 2., 2.])
# tensor([0., 0., 0.])
```
We see that whether `fn(torch.ones(3), 1)` gets compiled is dependent on calling order! This makes it more difficult to understand the PT2 programming model. Thus, when skipping a frame is condition-dependent, we shouldn't skip the code object unconditionally - we should instead just skip the current frame and use guards to check if a future call should also skip/fall back to eager.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,788,645,880
|
restore rng generation for fbcode
|
ngimel
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 16
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,788,631,856
|
[aarch64] multiple inductor test failures related to vec128_bfloat16
|
tinglvv
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 5
|
COLLABORATOR
|
### 🐛 Describe the bug
Observing the errors below on a Grace Hopper GPU across multiple inductor tests:
```
/usr/local/lib/python3.12/dist-packages/torch/include/ATen/cpu/vec/vec128/vec128_bfloat16_neon.h:83:37: error: cannot convert ‘__Uint16x8_t’ to ‘__Bfloat16x8_t’
83 | return vreinterpretq_u16_bf16(val); \
| ^~~
| |
| __Uint16x8_t
/usr/local/lib/python3.12/dist-packages/torch/include/ATen/cpu/vec/vec128/vec128_bfloat16_neon.h:99:1: note: in expansion of macro ‘IMPLEMENT_AT_BF16_SHIM’
99 | IMPLEMENT_AT_BF16_SHIM(bf16)
| ^~~~~~~~~~~~~~~~~~~~~~
```
example command
```
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=5 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_new_zeros_cuda_int32
```
@swolchok Looks related to https://github.com/pytorch/pytorch/pull/139090, could you help take a look? Thanks!
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ptrblck @nWEIdia @eqy @Aidyn-A
### Versions
it's an internal container, please let me know if this info is needed.
| true
|
2,788,572,319
|
dynamo: Don't crash with internal error if getattr on a tensor fails
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144817
This prevents crashes when getattr is called on a tensor for something
that doesn't exist.
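A hypothetical repro sketch of the failure mode (the attribute name is arbitrary; the expectation after this change is a plain Python error rather than an internal Dynamo error):
```python
import torch

@torch.compile
def fn(x):
    # getattr on an attribute tensors do not have
    return getattr(x, "does_not_exist")

try:
    fn(torch.ones(3))
except Exception as e:  # expected: an ordinary AttributeError-style failure
    print(type(e).__name__, e)
```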
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,788,571,850
|
[CD] Fix slim-wheel nvjit-link import problem
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
When another toolkit (say CUDA-12.3) is installed and `LD_LIBRARY_PATH` points to it, `import torch` will fail with
```
ImportError: /usr/local/lib/python3.10/dist-packages/torch/lib/../../nvidia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkComplete_12_4, version libnvJitLink.so.12
```
It cannot be worked around by tweaking rpath, as it also depends on the library load order, which is not guaranteed by any linker. Instead, solve this by preloading `nvjitlink` right after the global deps are loaded, by running something along the lines of the following:
```python
if version.cuda in ["12.4", "12.6"]:
    with open("/proc/self/maps") as f:
        _maps = f.read()
    # libtorch_global_deps.so always depends on cudart; check if it is installed via wheel
    if "nvidia/cuda_runtime/lib/libcudart.so" in _maps:
        # If all abovementioned conditions are met, preload nvjitlink
        _preload_cuda_deps("nvjitlink", "libnvJitLink.so.*[0-9]")
```
Fixes https://github.com/pytorch/pytorch/issues/140797
| true
|
2,788,564,082
|
Support kernel options when flex_attention compiled with dynamic=True
|
sjain-profluent
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
NONE
|
### 🐛 Describe the bug
I am trying to use `kernel_options` to specify smaller block sizes to improve flex attention performance for a specific bidirectional-causal masking pattern that depends on a secondary tensor (here `sequence_ids` in the code below). When I run the code with `dynamic=False`, it runs without error, but the specific kernel options (BLOCK_M and BLOCK_N) don't seem to work when using `dynamic=True`.
I am not sure if this is expected behavior or a bug, since specifying the options makes a substantial difference in performance with `dynamic=False`.
I am using the most recent nightly release (pytorch==2.7.0.dev20250114).
```python
import torch
import torch.nn.functional as F
from torch.nn.attention.flex_attention import create_block_mask, flex_attention
from triton.testing import do_bench

torch.set_default_device("cuda")
flex_attention = torch.compile(flex_attention, dynamic=True)

def create_block_causal_mask(sequence_ids):
    def block_causal_mask_fn(b, h, q_idx, kv_idx):
        return sequence_ids[b, q_idx] >= sequence_ids[b, kv_idx]

    B, seqlen = sequence_ids.shape
    return create_block_mask(block_causal_mask_fn, B, 1, seqlen, seqlen)

q = torch.randn(8, 8, 8192, 64, dtype=torch.float16)
k = torch.randn(8, 8, 8192, 64, dtype=torch.float16)
v = torch.randn(8, 8, 8192, 64, dtype=torch.float16)
sequence_ids = torch.cat(
    [torch.arange(375 + i).repeat(375 + i, 1).transpose(-1, -2).reshape(-1)[:8192][None, :] for i in range(8)],
    dim=0,
)
block_causal_mask = create_block_causal_mask(sequence_ids)
print("Sparsity: ", block_causal_mask.sparsity())
print("Flex (w/o kernel options): ", do_bench(lambda: flex_attention(q, k, v, block_mask=block_causal_mask)))
print("Flex (w kernel options): ", do_bench(lambda: flex_attention(q, k, v, block_mask=block_causal_mask, kernel_options={'BLOCK_M':64, 'BLOCK_N':64})))
```
Error Message:
```
Sparsity: 46.209716796875
Flex (w/o kernel options): 6.643037796020508
Traceback (most recent call last):
File "/workspace/protein-foundation/test/test_flex.py", line 69, in <module>
print("Flex (w kernel options): ", do_bench(lambda: flex_attention(q, k, v, block_mask=block_causal_mask, kernel_options={'BLOCK_M':64, 'BLOCK_N':64})))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/triton/testing.py", line 117, in do_bench
fn()
File "/workspace/protein-foundation/test/test_flex.py", line 69, in <lambda>
print("Flex (w kernel options): ", do_bench(lambda: flex_attention(q, k, v, block_mask=block_causal_mask, kernel_options={'BLOCK_M':64, 'BLOCK_N':64})))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/torch/_dynamo/eval_frame.py", line 580, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/torch/_inductor/compile_fx.py", line 704, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/usr/lib/python3/dist-packages/torch/_inductor/compile_fx.py", line 689, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/torch/_inductor/compile_fx.py", line 1149, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/torch/_inductor/compile_fx.py", line 1064, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/torch/_inductor/graph.py", line 1977, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/torch/_inductor/graph.py", line 2018, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/torch/_inductor/codecache.py", line 2768, in load_by_key_path
mod = _reload_python_module(key, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/torchinductor_root/46/c46tlmbpx7yexnjmhedzeejtwa53ghcbrwxufgezv7lkxc7775oj.py", line 500, in <module>
meta0 = {'BLOCK_M': s6, 'BLOCK_N': s8, 'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'WRITE_DQ': True, 'OUTPUT_LOGSUMEXP': True, 'FLOAT32_PRECISION': "'ieee'", 'IS_DIVISIBLE': False, 'SM_SCALE': 0.125, 'GQA_SHARED_HEADS': 1, 'HAS_FULL_BLOCKS': True, 'QK_HEAD_DIM': 64, 'V_HEAD_DIM': 64, 'SPARSE_Q_BLOCK_SIZE': 128, 'SPARSE_KV_BLOCK_SIZE': 128}
^^
torch._inductor.exc.InductorError: NameError: name 's6' is not defined
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
Output (when dynamic=False):
```
Sparsity: 46.209716796875
Flex (w/o kernel options): 5.270411014556885
Flex (w kernel options): 1.8650240898132324
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250114+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1048-oracle-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 224
On-line CPU(s) list: 0-223
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8480+
Stepping: 8
CPU MHz: 2000.000
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 5.3 MiB
L1i cache: 3.5 MiB
L2 cache: 224 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250114+cu124
[pip3] torch-optimizer==0.3.0
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.22.0.dev20250114+cu124
[pip3] triton==3.0.0
[conda] Could not collect
cc @chauhang @penguinwu @ezyang @bobrenjc93 @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,788,552,934
|
[BE] Parametrize `test_min_max`
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144251
* #144250
* __->__ #144249
It's better to have one unit test per dtype rather than a combined one.
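A sketch of the parametrization pattern being described, using the test-suite helpers (the dtypes and assertions here are illustrative, not the actual `test_min_max` body):
```python
import torch
from torch.testing._internal.common_utils import (
    TestCase,
    instantiate_parametrized_tests,
    parametrize,
    run_tests,
)

class TestMinMaxExample(TestCase):
    @parametrize("dtype", [torch.float32, torch.float16, torch.int32])
    def test_min_max(self, dtype):
        # Each dtype becomes its own test case, so a failure pinpoints the dtype.
        x = torch.arange(10, dtype=dtype)
        self.assertEqual(x.min(), x[0])
        self.assertEqual(x.max(), x[-1])

instantiate_parametrized_tests(TestMinMaxExample)

if __name__ == "__main__":
    run_tests()
```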
| true
|
2,788,543,173
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 12
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,788,529,647
|
Fix the wrong artifact in remaining workflows
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/inductor-perf-compare",
"ciflow/inductor-micro-benchmark",
"ciflow/inductor-micro-benchmark-cpu-x86"
] | 3
|
CONTRIBUTOR
|
I missed them in https://github.com/pytorch/pytorch/pull/144694 as they aren't run often, but they are still failing, e.g. https://github.com/pytorch/pytorch/actions/runs/12762640334/job/35578870178
The issue was from https://github.com/pytorch/pytorch/pull/125401 where it added `use-gha: ${{ inputs.use-gha }}` to linux_test workflow.
| true
|
2,788,529,565
|
remove allow-untyped-defs from nn/utils/_expanded_weights/conv_expanded_weights.py
|
bobrenjc93
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144811
| true
|
2,788,481,417
|
Support torch.func.grad for Flex Attention
|
cora-codes
|
open
|
[
"triaged",
"oncall: pt2",
"module: functorch",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 9
|
NONE
|
### 🚀 The feature, motivation and pitch
Currently, flex attention does not support `torch.func.grad`:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
torch.set_default_device("cuda")
q = torch.randn(1, 1, 1, 16)
k = torch.randn(1, 1, 1, 16)
v = torch.randn(1, 1, 1, 16)
torch.func.grad(flex_attention)(q, k, v)
```
```
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_ops.py", line 471, in wrapper
return self.dispatch(
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_ops.py", line 341, in dispatch
return dispatch_functorch(self, args, kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/pyfunctorch.py", line 294, in dispatch_functorch
return interpreter.process(op, args, kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/pyfunctorch.py", line 171, in process
kernel = op.functorch_table[TransformType.Grad]
KeyError: <TransformType.Grad: 2>
```
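For comparison, a possible workaround sketch, assuming eager autograd on `flex_attention` is sufficient for the use case (only the functorch `grad` transform appears to be unsupported, per the error above):
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

torch.set_default_device("cuda")
q = torch.randn(1, 1, 1, 16, requires_grad=True)
k = torch.randn(1, 1, 1, 16)
v = torch.randn(1, 1, 1, 16)

out = flex_attention(q, k, v)
# Reduce to a scalar and use plain autograd instead of torch.func.grad.
(grad_q,) = torch.autograd.grad(out.sum(), q)
print(grad_q.shape)
```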
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @zou3519 @Chillee @samdow @kshitij12345 @ydwu4 @bdhirsh @yf225 @drisspg @yanboliang @BoyuanFeng
| true
|
2,788,452,788
|
use cooperative schedule in scaled_mm for fast_accum=false
|
ngimel
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 3
|
COLLABORATOR
|
This improves perf for large matrices by more than 2x; a more detailed benchmark is coming.
On master

On this branch
<img width="601" alt="image" src="https://github.com/user-attachments/assets/7f55152b-1110-45e4-b2ea-6f274d543869" />
A plot similar to https://github.com/pytorch/ao/pull/1325#discussion_r1868193786
<details>
<summary>Benchmarking code:</summary>
```python
import torch
from triton.testing import do_bench
import itertools

def fn_aten_scales(a, b, scale_a, scale_b, use_fast_accum=False):
    return torch._scaled_mm(a, b.t(), scale_a.view(-1, 1), scale_b.view(1, -1), use_fast_accum=use_fast_accum, out_dtype=torch.bfloat16)

def fn_aten(a, b, scale, use_fast_accum=False):
    return torch._scaled_mm(a, b.t(), scale, scale, use_fast_accum=use_fast_accum, out_dtype=torch.bfloat16)

for i, j, k in itertools.product(range(9, 15), range(9, 15), range(9, 15)):
    m = 2**i
    n = 2**j
    k = 2**k
    a = torch.randn(m, k, device="cuda").to(dtype=torch.float8_e4m3fn)
    b = torch.randn(n, k, device="cuda").to(dtype=torch.float8_e4m3fn)
    scale_a = torch.randint(1, 11, (a.shape[0],), device="cuda", dtype=torch.float32)
    scale_b = torch.randint(1, 11, (b.shape[0],), device="cuda", dtype=torch.float32)
    scale_0 = torch.randn((), device="cuda", dtype=torch.float32)
    ms_rowwise_fast = do_bench(lambda: fn_aten_scales(a, b, scale_a, scale_b, use_fast_accum=True), warmup=25, rep=50)
    ms_rowwise_slow = do_bench(lambda: fn_aten_scales(a, b, scale_a, scale_b, use_fast_accum=False), warmup=25, rep=50)
    ms_tensor_fast = do_bench(lambda: fn_aten(a, b, scale_0, use_fast_accum=True), warmup=25, rep=50)
    ms_tensor_slow = do_bench(lambda: fn_aten(a, b, scale_0, use_fast_accum=False), warmup=25, rep=50)
    print(f"m={m}, n={n}, k={k}, fast={ms_rowwise_fast}, slow={ms_rowwise_slow}, ratio_tw={ms_tensor_slow /ms_tensor_fast}, ratio_rw={ms_rowwise_slow / ms_rowwise_fast}")
```
</details>
Higher N/K values still see about a 40% penalty; perhaps some additional heuristic tweaks would be useful.
| true
|
2,788,436,356
|
Remove C10_EMBEDDED
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144808
I added this to support code sharing with ExecuTorch, but the `operator<<` overrides are load-bearing for builds -- we have other code that attempts to pretty-print Half/BFloat16, and implicit conversions can't be used to make that work because there are *multiple* implicit conversions from Half/BFloat16 to primitive types, so which one to select is ambiguous. Also, we don't actually seem to need it in ExecuTorch core now, because we `#include <ostream>` there at the moment anyway.
| true
|
2,788,418,444
|
[inductor][BE] don't try/except ImportError for AttrsDescriptor versions
|
davidberard98
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144807
motivation: Ed's advice to avoid `except ImportError` (i.e. based on the fact that your target module/class might in fact exist, but you might run into some different ImportError whose stacktrace you now ignore).
additional motivation: I'm going to add some more cases to this list, and would like to avoid this pattern:
```python
try:
    ...
except ImportError:
    try:
        ...
    except ImportError:
        try:
            ...
```
suggestions on better ways to do this would be appreciated!
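One possible alternative, sketched here with illustrative module paths (only `triton.compiler.compiler` is mentioned above; the other candidate is an assumption), is to probe with `importlib.util.find_spec` so that only genuine module absence is tolerated:
```python
import importlib
import importlib.util

# Candidate locations for AttrsDescriptor; the first entry is hypothetical,
# the second is the older location mentioned below.
_CANDIDATES = (
    "triton.backends.compiler",
    "triton.compiler.compiler",
)

def find_attrs_descriptor():
    for module_name in _CANDIDATES:
        try:
            spec = importlib.util.find_spec(module_name)
        except ModuleNotFoundError:
            # The parent package is absent entirely; try the next candidate.
            continue
        if spec is not None:
            module = importlib.import_module(module_name)
            # Real ImportErrors raised while executing the module still propagate.
            if hasattr(module, "AttrsDescriptor"):
                return module.AttrsDescriptor
    return None
```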
test: ran with triton commit e5be006a (last working commit) and 34a6a2ff8 (in june, when AttrsDescriptor was still in triton.compiler.compiler)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,788,398,033
|
[export] handle buffer/input mutations for joint-graph
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary: previous construction of GraphSignature output specs didn't consider buffer/user input mutations
Test Plan: test_experimental
Differential Revision: D68177409
| true
|
2,788,397,408
|
"GenericHOPVariable" / abstract out Dynamo support for HOPs
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: higher order operators",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
From HOP sync discussion (with @xmfan).
Idea 1: abstract out Dynamo support for HOPs
Some way to create a HOP where:
1) a user defines how to construct the inputs to each subgraph from the `(args, kwargs)`
2) using this, we can create a GenericHOPVariable that should be able to handle FX graphs as inputs; Dynamo can always speculate_subgraph on the FX graphs using the function from (1)
Idea 2: Some way to tag a HOP's subgraph as being "already proven safe by Dynamo"
1) If we see it has already been tagged, then there isn't a need to speculate it again.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @ydwu4 @bdhirsh @yf225
| true
|
2,788,396,184
|
Test of RST to MD
|
sekyondaMeta
|
closed
|
[
"Stale",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Test of rst to md conversion
DO NOT MERGE
| true
|
2,788,354,655
|
Something is fishy with discard_graph_changes
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 1
|
CONTRIBUTOR
|
Discovered with @yanboliang in https://github.com/pytorch/pytorch/pull/142830#discussion_r1913437378
cc @chauhang @penguinwu @ydwu4 @bdhirsh @yf225.
What's going on is:
1) we do a discard_graph_changes
2) then we do a speculate_subgraph, which gives us some lifted_freevars
The lifted_freevars map contains proxies from the discard_graph_changes subtracer INSTEAD OF the outer subtracer. This is pretty unexpected and seems like a footgun when using discard_graph_changes.
I'm not sure what to do about this right now.
| true
|
2,788,325,314
|
Add non_c_binding torch functions to allowlist for AOTAutogradCache, confirm no special handlers for them
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144802
Differential Revision: [D68173093](https://our.internmc.facebook.com/intern/diff/D68173093/)
This diff allows any function in torch_non_c_binding_in_graph_functions to be safe to cache. These functions should be safe to cache because they are part of the torch API, and do not save global state (or if they do, dynamo creates unique guards around the constants they return).
A function that's allowed in a dynamo graph is safe to cache for AOTAutograd purposes as long as:
- It's functional (i.e. does not access global state);
- or its value is constant folded away (and guarded against by dynamo)
The tricky cases are functions that dynamo uses special handlers to track. These special handlers can sometimes close over stuff that's safe for dynamo locally, but isn't encoded anywhere when cached across processes. An example of this is `DTensor.from_local`, where various DeviceMesh information doesn't change in the same dynamo process, but can change across multiple processes. The handler for `DTensor.from_local` closes over these and dynamo creates a proxy for the function call. This is not safe to cache.
That said, most special handlers are in fact functional and safe. So I add a unit test to test_trace_rules.py that confirms that any function with special handlers in dynamo added to this list needs to be audited to be safe to cache.
The list of safe handlers there either:
- Don't access global state;
- Guard on global state; or
- Always returns a constant that never changes
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,788,292,455
|
[ONNX] Use python_dispatcher in type promotion
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 4
|
COLLABORATOR
|
Fix #143118
Use python_dispatcher in the type promotion pass to preserve symbolic shapes according to @angelayi 's suggestions. (Thanks!)
Tested locally. I wasn't able to create a minimal repro except by using the full model.
| true
|
2,788,257,912
|
[fsdp2] maybe unreliable `set_unshard_in_backward(False)`
|
leonardo0lyj
|
closed
|
[
"oncall: distributed"
] | 1
|
NONE
|
Hey Andrew @awgu,
As a big fan of FSDP2, I keep posting improvements 😄
This flag ([`set_unshard_in_backward(False)`](https://github.com/pytorch/pytorch/blob/aa57f0c6637d4377d2d86d377fdf41840498960a/torch/distributed/fsdp/_fully_shard/_fully_shard.py#L408)) is super helpful for skipping `unshard()` in the backward pass, especially for the unused backward pass of certain `nn.Module`s.
A valid example is ZeRO3: these parameters are only used in forward, not in backward (so `unshard()` is skipped), so we keep them in the sharded state and keep their gradients as None during the backward pass, which saves allgather and reduce communication while remaining correct.
But in the case of ZeRO2 or ZeRO++ (`reshard_after_forward=False or int`), `set_unshard_in_backward(False)` results in misleading logic and a potential bug. For example:
- these parameters are only used in forward, not in backward
- after forward, these parameters stay in the unsharded state (ZeRO2) or the [replicate, shard] state (ZeRO++)
- `set_unshard_in_backward(False)` skips `unshard` in the pre-backward hook
- so in backward, these parameters stay in the ZeRO2 or ZeRO++ state, which is misleading (i.e., semantically, `set_unshard_in_backward(False)` should mean the parameters stay in the sharded state, but in fact they remain in the ZeRO2/ZeRO++ state)
As a suggestion, we can either ban ZeRO2 and ZeRO++ when using `set_unshard_in_backward(False)` or figure out a more general way to support all ZeRO states.
What do you think 😁?
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,788,239,172
|
Update ck
|
alugorey
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng",
"skip-pr-sanity-checks"
] | 22
|
CONTRIBUTOR
|
Updates the CK version and re-implements kernel generation
cc @albanD
| true
|
2,788,218,021
|
[MPSInductor] Add `min`/`max` to MetalExprPrinter
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144796
* #144795
* __->__ #144798
After that, `GPUTests::test_avg_pool2d8_mps` and `GPUTests::test_avg_pool2d5_mps` pass.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,788,199,965
|
[AMD] De-noise tf32 warnings
|
xw285cornell
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary: This is way too noisy, especially during unit tests, so just log once.
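A minimal sketch of a "log once" pattern at the Python level, for illustration only (the actual change may live elsewhere and use a different mechanism, and the message text below is made up):
```python
import functools
import logging

log = logging.getLogger(__name__)

@functools.lru_cache(maxsize=None)
def _warn_once(message: str) -> None:
    # lru_cache deduplicates identical messages, so repeated calls are no-ops.
    log.warning(message)

for _ in range(100):
    _warn_once("tf32 path selected on this device; results may differ slightly")
```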
Test Plan: OSS CI. Tested on a unit test and now I only see one line (hard to notice :) ).
Differential Revision: D68167633
| true
|
2,788,190,330
|
Fix FakeTensor device creation for MPS
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144827
* #144826
* __->__ #144796
* #144795
By promoting `torch.device("mps")` to `torch.device("mps:0")`, but skipping the `is_initialized` check, as MPS does not really support multi-GPU right now.
This fixes `GPUTests.test_remove_no_ops_mps`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,788,190,250
|
[BE] Extend `test_remove_no_ops`
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144827
* #144826
* #144796
* __->__ #144795
----
- Use `is_dtype_supported` to skip dtype promotions portion of the test on unsupported device
- Extend it to use `torch.float16` so promotions could be checked there
- Implement `CpuInterface.is_bfloat16_supported` that returns true (which looks like the case, even if it's supported via emulation)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,788,167,847
|
[c10d][NCCL] Implement ncclCommInitRankScalable (merging #136789)
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ciflow/binaries_wheel"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144794
Try to land https://github.com/pytorch/pytorch/pull/136789/files on our end and fix any remaining issues.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,788,152,394
|
[CUDAGraph][Docs] add `cuda` to `torch.randn`
|
BoyuanFeng
|
closed
|
[
"Merged",
"module: cuda graphs",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
The previous doc example created the `torch.randn` tensor on CPU, so CUDAGraph was skipped.
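A sketch of the pattern the doc change points at, assuming a CUDA device; the key detail is `device="cuda"` on the example input so CUDA graph capture is not skipped for a CPU-only op:
```python
import torch

@torch.compile(mode="reduce-overhead")  # uses CUDA graphs under the hood
def f(x):
    return x * 2 + 1

if torch.cuda.is_available():
    x = torch.randn(8, device="cuda")  # the doc example previously omitted device="cuda"
    for _ in range(3):  # a few warm-up iterations before graphs are recorded and replayed
        out = f(x)
    print(out.device)
```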
Fixes #144386
cc @mcarilli @ezyang @eellison @penguinwu
| true
|
2,788,147,618
|
massive number of runtime asserts can hamper compile times
|
bdhirsh
|
closed
|
[
"high priority",
"triage review",
"oncall: pt2",
"module: dynamic shapes"
] | 5
|
CONTRIBUTOR
|
internal xref: https://fb.workplace.com/groups/1075192433118967/permalink/1585006445470894/
We're spending ~2 hours in one job cranking out a few thousand runtime asserts. The relevant bit is this section of the logs:
```
# .... 4000 lines of runtime asserts
[trainers0]:[rank0]:I0114 06:02:10.688613 5302 torch/fx/experimental/symbolic_shapes.py:6328] [6/0] runtime_assert u0 + u1 + u10 + u100 + u1000 +
[trainers0]:[rank0]:I0114 06:02:25.990159 5302 torch/fx/experimental/symbolic_shapes.py:6328] [6/0] runtime_assert u0 + u1 + u10 + u100 + u1000 +
[trainers0]:[rank0]:I0114 06:02:41.910004 5302 torch/fx/experimental/symbolic_shapes.py:6328] [6/0] runtime_assert u0 + u1 + u10 + u100 + u1000 +
[trainers0]:[rank0]:I0114 06:02:57.359968 5302 torch/fx/experimental/symbolic_shapes.py:6328] [6/0] runtime_assert u0 + u1 + u10 + u100 + u1000 +
[trainers0]:[rank0]:I0114 06:03:12.918750 5302 torch/fx/experimental/symbolic_shapes.py:6328] [6/0] runtime_assert u0 + u1 + u10 + u100 + u1000 +
[trainers0]:[rank0]:I0114 06:03:28.610256 5302 torch/fx/experimental/symbolic_shapes.py:6328] [6/0] runtime_assert u0 + u1 + u10 + u100 + u1000 +
```
full paste here: P1712262166
You can see ~15 seconds between each log line, indicating that once we have a few thousand runtime asserts, there is probably some quadratic behavior going on that slows compile times.
The runtime asserts themselves are also quite big and contain many symbols. The last one:
```
runtime_assert u0 + u1 + u10 + u100 + u1000 + u1001 + u1002 + u1003 + u1004 + u1005 + u1006 + u1007 + u1008 + u1009 + u101 + u1010 + u1011 + u1012 + u1013 + u1014 + u1015 + u1016 + u1017 + u1018 + u1019 + u102 + u1020 + u1021 + u1022 + u1023 + u1024 + u1025 + u1026 + u1027 + u1028 + u1029 + u103 + u1030 + u1031 + u1032 + u1033 + u1034 + u1035 + u1036 + u1037 + u1038 + u1039 + u104 + u1040 + u1041 + u1042 + u1043 + u1044 + u1045 + u1046 + u1047 + u1048 + u1049 + u105 + u1050 + u1051 + u1052 + u1053 + u1054 + u1055 + u1056 + u1057 + u1058 + u1059 + u106 + u1060 + u1061 + u1062 + u1063 + u1064 + u1065 + u1066 + u1067 + u1068 + u1069 + u107 + u1070 + u1071 + u1072 + u1073 + u1074 + u1075 + u1076 + u1077 + u1078 + u1079 + u108 + u1080 + u1081 + u1082 + u1083 + u1084 + u1085 + u1086 + u1087 + u1088 + u1089 + u109 + u1090 + u1091 + u1092 + u1093 + u1094 + u1095 + u1096 + u1097 + u1098 + u1099 + u11 + u110 + u1100 + u1101 + u1102 + u1103 + u1104 + u1105 + u1106 + u1107 + u1108 + u1109 + u111 + u1110 + u1111 + u1112 + u1113 + u1114 + u1115 + u1116 + u1117 + u1118 + u1119 + u112 + u1120 + u1121 + u1122 + u1123 + u1124 + u1125 + u1126 + u1127 + u1128 + u1129 + u113 + u1130 + u1131 + u1132 + u1133 + u1134 + u1135 + u1136 + u1137 + u1138 + u1139 + u114 + u1140 + u1141 + u1142 + u1143 + u1144 + u1145 + u1146 + u1147 + u1148 + u1149 + u115 + u1150 + u1151 + u1152 + u1153 + u1154 + u1155 + u1156 + u1157 + u1158 + u1159 + u116 + u1160 + u1161 + u1162 + u1163 + u1164 + u1165 + u117 + u118 + u119 + u12 + u120 + u121 + u122 + u123 + u124 + u125 + u126 + u127 + u128 + u129 + u13 + u130 + u131 + u132 + u133 + u134 + u135 + u136 + u137 + u138 + u139 + u14 + u140 + u141 + u142 + u143 + u144 + u145 + u146 + u147 + u148 + u149 + u15 + u150 + u151 + u152 + u153 + u154 + u155 + u156 + u157 + u158 + u159 + u16 + u160 + u161 + u162 + u163 + u164 + u165 + u166 + u167 + u168 + u169 + u17 + u170 + u171 + u172 + u173 + u174 + u175 + u176 + u177 + u178 + u179 + u18 + u180 + u181 + u182 + u183 + u184 + u185 + u186 + u187 + u188 + u189 + u19 + u190 + u191 + u192 + u193 + u194 + u195 + u196 + u197 + u198 + u199 + u2 + u20 + u200 + u201 + u202 + u203 + u204 + u205 + u206 + u207 + u208 + u209 + u21 + u210 + u211 + u212 + u213 + u214 + u215 + u216 + u217 + u218 + u219 + u22 + u220 + u221 + u222 + u223 + u224 + u225 + u226 + u227 + u228 + u229 + u23 + u230 + u231 + u232 + u233 + u234 + u235 + u236 + u237 + u238 + u239 + u24 + u240 + u241 + u242 + u243 + u244 + u245 + u246 + u247 + u248 + u249 + u25 + u250 + u251 + u252 + u253 + u254 + u255 + u256 + u257 + u258 + u259 + u26 + u260 + u261 + u262 + u263 + u264 + u265 + u266 + u267 + u268 + u269 + u27 + u270 + u271 + u272 + u273 + u274 + u275 + u276 + u277 + u278 + u279 + u28 + u280 + u281 + u282 + u283 + u284 + u285 + u286 + u287 + u288 + u289 + u29 + u290 + u291 + u292 + u293 + u294 + u295 + u296 + u297 + u298 + u299 + u3 + u30 + u300 + u301 + u302 + u303 + u304 + u305 + u306 + u307 + u308 + u309 + u31 + u310 + u311 + u312 + u313 + u314 + u315 + u316 + u317 + u318 + u319 + u32 + u320 + u321 + u322 + u323 + u324 + u325 + u326 + u327 + u328 + u329 + u33 + u330 + u331 + u332 + u333 + u334 + u335 + u336 + u337 + u338 + u339 + u34 + u340 + u341 + u342 + u343 + u344 + u345 + u346 + u347 + u348 + u349 + u35 + u350 + u351 + u352 + u353 + u354 + u355 + u356 + u357 + u358 + u359 + u36 + u360 + u361 + u362 + u363 + u364 + u365 + u366 + u367 + u368 + u369 + u37 + u370 + u371 + u372 + u373 + u374 + u375 + u376 + u377 + u378 + u379 + u38 + u380 + u381 + u382 + u383 + u384 + u385 + u386 + u387 + 
u388 + u389 + u39 + u390 + u391 + u392 + u393 + u394 + u395 + u396 + u397 + u398 + u399 + u4 + u40 + u400 + u401 + u402 + u403 + u404 + u405 + u406 + u407 + u408 + u409 + u41 + u410 + u411 + u412 + u413 + u414 + u415 + u416 + u417 + u418 + u419 + u42 + u420 + u421 + u422 + u423 + u424 + u425 + u426 + u427 + u428 + u429 + u43 + u430 + u431 + u432 + u433 + u434 + u435 + u436 + u437 + u438 + u439 + u44 + u440 + u441 + u442 + u443 + u444 + u445 + u446 + u447 + u448 + u449 + u45 + u450 + u451 + u452 + u453 + u454 + u455 + u456 + u457 + u458 + u459 + u46 + u460 + u461 + u462 + u463 + u464 + u465 + u466 + u467 + u468 + u469 + u47 + u470 + u471 + u472 + u473 + u474 + u475 + u476 + u477 + u478 + u479 + u48 + u480 + u481 + u482 + u483 + u484 + u485 + u486 + u487 + u488 + u489 + u49 + u490 + u491 + u492 + u493 + u494 + u495 + u496 + u497 + u498 + u499 + u5 + u50 + u500 + u501 + u502 + u503 + u504 + u505 + u506 + u507 + u508 + u509 + u51 + u510 + u511 + u512 + u513 + u514 + u515 + u516 + u517 + u518 + u519 + u52 + u520 + u521 + u522 + u523 + u524 + u525 + u526 + u527 + u528 + u529 + u53 + u530 + u531 + u532 + u533 + u534 + u535 + u536 + u537 + u538 + u539 + u54 + u540 + u541 + u542 + u543 + u544 + u545 + u546 + u547 + u548 + u549 + u55 + u550 + u551 + u552 + u553 + u554 + u555 + u556 + u557 + u558 + u559 + u56 + u560 + u561 + u562 + u563 + u564 + u565 + u566 + u567 + u568 + u569 + u57 + u570 + u571 + u572 + u573 + u574 + u575 + u576 + u577 + u578 + u579 + u58 + u580 + u581 + u582 + u583 + u584 + u585 + u586 + u587 + u588 + u589 + u59 + u590 + u591 + u592 + u593 + u594 + u595 + u596 + u597 + u598 + u599 + u6 + u60 + u600 + u601 + u602 + u603 + u604 + u605 + u606 + u607 + u608 + u609 + u61 + u610 + u611 + u612 + u613 + u614 + u615 + u616 + u617 + u618 + u619 + u62 + u620 + u621 + u622 + u623 + u624 + u625 + u626 + u627 + u628 + u629 + u63 + u630 + u631 + u632 + u633 + u634 + u635 + u636 + u637 + u638 + u639 + u64 + u640 + u641 + u642 + u643 + u644 + u645 + u646 + u647 + u648 + u649 + u65 + u650 + u651 + u652 + u653 + u654 + u655 + u656 + u657 + u658 + u659 + u66 + u660 + u661 + u662 + u663 + u664 + u665 + u666 + u667 + u668 + u669 + u67 + u670 + u671 + u672 + u673 + u674 + u675 + u676 + u677 + u678 + u679 + u68 + u680 + u681 + u682 + u683 + u684 + u685 + u686 + u687 + u688 + u689 + u69 + u690 + u691 + u692 + u693 + u694 + u695 + u696 + u697 + u698 + u699 + u7 + u70 + u700 + u701 + u702 + u703 + u704 + u705 + u706 + u707 + u708 + u709 + u71 + u710 + u711 + u712 + u713 + u714 + u715 + u716 + u717 + u718 + u719 + u72 + u720 + u721 + u722 + u723 + u724 + u725 + u726 + u727 + u728 + u729 + u73 + u730 + u731 + u732 + u733 + u734 + u735 + u736 + u737 + u738 + u739 + u74 + u740 + u741 + u742 + u743 + u744 + u745 + u746 + u747 + u748 + u749 + u75 + u750 + u751 + u752 + u753 + u754 + u755 + u756 + u757 + u758 + u759 + u76 + u760 + u761 + u762 + u763 + u764 + u765 + u766 + u767 + u768 + u769 + u77 + u770 + u771 + u772 + u773 + u774 + u775 + u776 + u777 + u778 + u779 + u78 + u780 + u781 + u782 + u783 + u784 + u785 + u786 + u787 + u788 + u789 + u79 + u790 + u791 + u792 + u793 + u794 + u795 + u796 + u797 + u798 + u799 + u8 + u80 + u800 + u801 + u802 + u803 + u804 + u805 + u806 + u807 + u808 + u809 + u81 + u810 + u811 + u812 + u813 + u814 + u815 + u816 + u817 + u818 + u819 + u82 + u820 + u821 + u822 + u823 + u824 + u825 + u826 + u827 + u828 + u829 + u83 + u830 + u831 + u832 + u833 + u834 + u835 + u836 + u837 + u838 + u839 + u84 + u840 + u841 + u842 + u843 + u844 + u845 + u846 + u847 + u848 + u849 + u85 + u850 + u851 + 
u852 + u853 + u854 + u855 + u856 + u857 + u858 + u859 + u86 + u860 + u861 + u862 + u863 + u864 + u865 + u866 + u867 + u868 + u869 + u87 + u870 + u871 + u872 + u873 + u874 + u875 + u876 + u877 + u878 + u879 + u88 + u880 + u881 + u882 + u883 + u884 + u885 + u886 + u887 + u888 + u889 + u89 + u890 + u891 + u892 + u893 + u894 + u895 + u896 + u897 + u898 + u899 + u9 + u90 + u900 + u901 + u902 + u903 + u904 + u905 + u906 + u907 + u908 + u909 + u91 + u910 + u911 + u912 + u913 + u914 + u915 + u916 + u917 + u918 + u919 + u92 + u920 + u921 + u922 + u923 + u924 + u925 + u926 + u927 + u928 + u929 + u93 + u930 + u931 + u932 + u933 + u934 + u935 + u936 + u937 + u938 + u939 + u94 + u940 + u941 + u942 + u943 + u944 + u945 + u946 + u947 + u948 + u949 + u95 + u950 + u951 + u952 + u953 + u954 + u955 + u956 + u957 + u958 + u959 + u96 + u960 + u961 + u962 + u963 + u964 + u965 + u966 + u967 + u968 + u969 + u97 + u970 + u971 + u972 + u973 + u974 + u975 + u976 + u977 + u978 + u979 + u98 + u980 + u981 + u982 + u983 + u984 + u985 + u986 + u987 + u988 + u989 + u99 + u990 + u991 + u992 + u993 + u994 + u995 + u996 + u997 + u998 + u999 <= 487233 - u1166 [guard added] prob_user_split = torch.split(class_prob, num_objs_per_user.tolist()) # <torch_package_0>.dper3_models/ads_ranking/model_impl/sparse_nn/hiearchical_pre_norm_pma.py:149 in get_topk (_ops.py:801 in decompose), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="u0 + u1 + u10 + u100 + u1000 + u1001 + u1002 + u1003 + u1004 + u1005 + u1006 + u1007 + u1008 + u1009 + u101 + u1010 + u1011 + u1012 + u1013 + u1014 + u1015 + u1016 + u1017 + u1018 + u1019 + u102 + u1020 + u1021 + u1022 + u1023 + u1024 + u1025 + u1026 + u1027 + u1028 + u1029 + u103 + u1030 + u1031 + u1032 + u1033 + u1034 + u1035 + u1036 + u1037 + u1038 + u1039 + u104 + u1040 + u1041 + u1042 + u1043 + u1044 + u1045 + u1046 + u1047 + u1048 + u1049 + u105 + u1050 + u1051 + u1052 + u1053 + u1054 + u1055 + u1056 + u1057 + u1058 + u1059 + u106 + u1060 + u1061 + u1062 + u1063 + u1064 + u1065 + u1066 + u1067 + u1068 + u1069 + u107 + u1070 + u1071 + u1072 + u1073 + u1074 + u1075 + u1076 + u1077 + u1078 + u1079 + u108 + u1080 + u1081 + u1082 + u1083 + u1084 + u1085 + u1086 + u1087 + u1088 + u1089 + u109 + u1090 + u1091 + u1092 + u1093 + u1094 + u1095 + u1096 + u1097 + u1098 + u1099 + u11 + u110 + u1100 + u1101 + u1102 + u1103 + u1104 + u1105 + u1106 + u1107 + u1108 + u1109 + u111 + u1110 + u1111 + u1112 + u1113 + u1114 + u1115 + u1116 + u1117 + u1118 + u1119 + u112 + u1120 + u1121 + u1122 + u1123 + u1124 + u1125 + u1126 + u1127 + u1128 + u1129 + u113 + u1130 + u1131 + u1132 + u1133 + u1134 + u1135 + u1136 + u1137 + u1138 + u1139 + u114 + u1140 + u1141 + u1142 + u1143 + u1144 + u1145 + u1146 + u1147 + u1148 + u1149 + u115 + u1150 + u1151 + u1152 + u1153 + u1154 + u1155 + u1156 + u1157 + u1158 + u1159 + u116 + u1160 + u1161 + u1162 + u1163 + u1164 + u1165 + u117 + u118 + u119 + u12 + u120 + u121 + u122 + u123 + u124 + u125 + u126 + u127 + u128 + u129 + u13 + u130 + u131 + u132 + u133 + u134 + u135 + u136 + u137 + u138 + u139 + u14 + u140 + u141 + u142 + u143 + u144 + u145 + u146 + u147 + u148 + u149 + u15 + u150 + u151 + u152 + u153 + u154 + u155 + u156 + u157 + u158 + u159 + u16 + u160 + u161 + u162 + u163 + u164 + u165 + u166 + u167 + u168 + u169 + u17 + u170 + u171 + u172 + u173 + u174 + u175 + u176 + u177 + u178 + u179 + u18 + u180 + u181 + u182 + u183 + u184 + u185 + u186 + u187 + u188 + u189 + u19 + u190 + u191 + u192 + u193 + u194 + u195 + u196 + u197 + u198 + u199 + u2 + u20 + u200 + u201 + u202 
+ u203 + u204 + u205 + u206 + u207 + u208 + u209 + u21 + u210 + u211 + u212 + u213 + u214 + u215 + u216 + u217 + u218 + u219 + u22 + u220 + u221 + u222 + u223 + u224 + u225 + u226 + u227 + u228 + u229 + u23 + u230 + u231 + u232 + u233 + u234 + u235 + u236 + u237 + u238 + u239 + u24 + u240 + u241 + u242 + u243 + u244 + u245 + u246 + u247 + u248 + u249 + u25 + u250 + u251 + u252 + u253 + u254 + u255 + u256 + u257 + u258 + u259 + u26 + u260 + u261 + u262 + u263 + u264 + u265 + u266 + u267 + u268 + u269 + u27 + u270 + u271 + u272 + u273 + u274 + u275 + u276 + u277 + u278 + u279 + u28 + u280 + u281 + u282 + u283 + u284 + u285 + u286 + u287 + u288 + u289 + u29 + u290 + u291 + u292 + u293 + u294 + u295 + u296 + u297 + u298 + u299 + u3 + u30 + u300 + u301 + u302 + u303 + u304 + u305 + u306 + u307 + u308 + u309 + u31 + u310 + u311 + u312 + u313 + u314 + u315 + u316 + u317 + u318 + u319 + u32 + u320 + u321 + u322 + u323 + u324 + u325 + u326 + u327 + u328 + u329 + u33 + u330 + u331 + u332 + u333 + u334 + u335 + u336 + u337 + u338 + u339 + u34 + u340 + u341 + u342 + u343 + u344 + u345 + u346 + u347 + u348 + u349 + u35 + u350 + u351 + u352 + u353 + u354 + u355 + u356 + u357 + u358 + u359 + u36 + u360 + u361 + u362 + u363 + u364 + u365 + u366 + u367 + u368 + u369 + u37 + u370 + u371 + u372 + u373 + u374 + u375 + u376 + u377 + u378 + u379 + u38 + u380 + u381 + u382 + u383 + u384 + u385 + u386 + u387 + u388 + u389 + u39 + u390 + u391 + u392 + u393 + u394 + u395 + u396 + u397 + u398 + u399 + u4 + u40 + u400 + u401 + u402 + u403 + u404 + u405 + u406 + u407 + u408 + u409 + u41 + u410 + u411 + u412 + u413 + u414 + u415 + u416 + u417 + u418 + u419 + u42 + u420 + u421 + u422 + u423 + u424 + u425 + u426 + u427 + u428 + u429 + u43 + u430 + u431 + u432 + u433 + u434 + u435 + u436 + u437 + u438 + u439 + u44 + u440 + u441 + u442 + u443 + u444 + u445 + u446 + u447 + u448 + u449 + u45 + u450 + u451 + u452 + u453 + u454 + u455 + u456 + u457 + u458 + u459 + u46 + u460 + u461 + u462 + u463 + u464 + u465 + u466 + u467 + u468 + u469 + u47 + u470 + u471 + u472 + u473 + u474 + u475 + u476 + u477 + u478 + u479 + u48 + u480 + u481 + u482 + u483 + u484 + u485 + u486 + u487 + u488 + u489 + u49 + u490 + u491 + u492 + u493 + u494 + u495 + u496 + u497 + u498 + u499 + u5 + u50 + u500 + u501 + u502 + u503 + u504 + u505 + u506 + u507 + u508 + u509 + u51 + u510 + u511 + u512 + u513 + u514 + u515 + u516 + u517 + u518 + u519 + u52 + u520 + u521 + u522 + u523 + u524 + u525 + u526 + u527 + u528 + u529 + u53 + u530 + u531 + u532 + u533 + u534 + u535 + u536 + u537 + u538 + u539 + u54 + u540 + u541 + u542 + u543 + u544 + u545 + u546 + u547 + u548 + u549 + u55 + u550 + u551 + u552 + u553 + u554 + u555 + u556 + u557 + u558 + u559 + u56 + u560 + u561 + u562 + u563 + u564 + u565 + u566 + u567 + u568 + u569 + u57 + u570 + u571 + u572 + u573 + u574 + u575 + u576 + u577 + u578 + u579 + u58 + u580 + u581 + u582 + u583 + u584 + u585 + u586 + u587 + u588 + u589 + u59 + u590 + u591 + u592 + u593 + u594 + u595 + u596 + u597 + u598 + u599 + u6 + u60 + u600 + u601 + u602 + u603 + u604 + u605 + u606 + u607 + u608 + u609 + u61 + u610 + u611 + u612 + u613 + u614 + u615 + u616 + u617 + u618 + u619 + u62 + u620 + u621 + u622 + u623 + u624 + u625 + u626 + u627 + u628 + u629 + u63 + u630 + u631 + u632 + u633 + u634 + u635 + u636 + u637 + u638 + u639 + u64 + u640 + u641 + u642 + u643 + u644 + u645 + u646 + u647 + u648 + u649 + u65 + u650 + u651 + u652 + u653 + u654 + u655 + u656 + u657 + u658 + u659 + u66 + u660 + u661 + u662 + u663 + u664 + u665 + u666 + u667 + 
u668 + u669 + u67 + u670 + u671 + u672 + u673 + u674 + u675 + u676 + u677 + u678 + u679 + u68 + u680 + u681 + u682 + u683 + u684 + u685 + u686 + u687 + u688 + u689 + u69 + u690 + u691 + u692 + u693 + u694 + u695 + u696 + u697 + u698 + u699 + u7 + u70 + u700 + u701 + u702 + u703 + u704 + u705 + u706 + u707 + u708 + u709 + u71 + u710 + u711 + u712 + u713 + u714 + u715 + u716 + u717 + u718 + u719 + u72 + u720 + u721 + u722 + u723 + u724 + u725 + u726 + u727 + u728 + u729 + u73 + u730 + u731 + u732 + u733 + u734 + u735 + u736 + u737 + u738 + u739 + u74 + u740 + u741 + u742 + u743 + u744 + u745 + u746 + u747 + u748 + u749 + u75 + u750 + u751 + u752 + u753 + u754 + u755 + u756 + u757 + u758 + u759 + u76 + u760 + u761 + u762 + u763 + u764 + u765 + u766 + u767 + u768 + u769 + u77 + u770 + u771 + u772 + u773 + u774 + u775 + u776 + u777 + u778 + u779 + u78 + u780 + u781 + u782 + u783 + u784 + u785 + u786 + u787 + u788 + u789 + u79 + u790 + u791 + u792 + u793 + u794 + u795 + u796 + u797 + u798 + u799 + u8 + u80 + u800 + u801 + u802 + u803 + u804 + u805 + u806 + u807 + u808 + u809 + u81 + u810 + u811 + u812 + u813 + u814 + u815 + u816 + u817 + u818 + u819 + u82 + u820 + u821 + u822 + u823 + u824 + u825 + u826 + u827 + u828 + u829 + u83 + u830 + u831 + u832 + u833 + u834 + u835 + u836 + u837 + u838 + u839 + u84 + u840 + u841 + u842 + u843 + u844 + u845 + u846 + u847 + u848 + u849 + u85 + u850 + u851 + u852 + u853 + u854 + u855 + u856 + u857 + u858 + u859 + u86 + u860 + u861 + u862 + u863 + u864 + u865 + u866 + u867 + u868 + u869 + u87 + u870 + u871 + u872 + u873 + u874 + u875 + u876 + u877 + u878 + u879 + u88 + u880 + u881 + u882 + u883 + u884 + u885 + u886 + u887 + u888 + u889 + u89 + u890 + u891 + u892 + u893 + u894 + u895 + u896 + u897 + u898 + u899 + u9 + u90 + u900 + u901 + u902 + u903 + u904 + u905 + u906 + u907 + u908 + u909 + u91 + u910 + u911 + u912 + u913 + u914 + u915 + u916 + u917 + u918 + u919 + u92 + u920 + u921 + u922 + u923 + u924 + u925 + u926 + u927 + u928 + u929 + u93 + u930 + u931 + u932 + u933 + u934 + u935 + u936 + u937 + u938 + u939 + u94 + u940 + u941 + u942 + u943 + u944 + u945 + u946 + u947 + u948 + u949 + u95 + u950 + u951 + u952 + u953 + u954 + u955 + u956 + u957 + u958 + u959 + u96 + u960 + u961 + u962 + u963 + u964 + u965 + u966 + u967 + u968 + u969 + u97 + u970 + u971 + u972 + u973 + u974 + u975 + u976 + u977 + u978 + u979 + u98 + u980 + u981 + u982 + u983 + u984 + u985 + u986 + u987 + u988 + u989 + u99 + u990 + u991 + u992 + u993 + u994 + u995 + u996 + u997 + u998 + u999 <= 487233 - u1166"
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bobrenjc93
| true
|
2,788,126,685
|
fix as_bool serde
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144791
Differential Revision: [D68167701](https://our.internmc.facebook.com/intern/diff/D68167701/)
| true
|
2,788,117,169
|
[torch][ao][EASY] Change print to log in numeric debugger to avoid large output
|
dulinriley
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: AO frontend"
] | 4
|
CONTRIBUTOR
|
Summary:
This print statement was spewing a bunch of data in logs by default, but it should
be silenceable.
Use `log.debug` instead.
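A minimal sketch of the pattern this diff applies (the logger and message names are illustrative, not the actual numeric debugger code):
```python
# Minimal sketch of the print -> log.debug change; names are illustrative.
import logging

log = logging.getLogger(__name__)

def report_mismatch(node_name: str, stats: dict) -> None:
    # Before: print(f"numeric debugger stats for {node_name}: {stats}")
    # After: only emitted when debug logging is enabled for this module.
    log.debug("numeric debugger stats for %s: %s", node_name, stats)
```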
Differential Revision: D68166823
| true
|
2,788,109,990
|
[c10d][ez] Add comments to the end of Macro for better readability
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144794
* __->__ #144789
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,788,073,242
|
Avoid the builtin `numbers` module.
|
randolf-scholz
|
closed
|
[
"module: distributions",
"module: typing",
"triaged",
"actionable"
] | 5
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Currently, torch uses the builtin [`numbers`](https://docs.python.org/3/library/numbers.html) module [in a few places (only ~40 hits)](https://github.com/search?q=repo%3Apytorch%2Fpytorch%20%2Ffrom%20numbers%20import%7Cimport%20numbers%2F&type=code). However, the `numbers` module is problematic for multiple reasons:
1. The `numbers` module is incompatible with type annotations (see https://github.com/python/mypy/issues/3186, example: [mypy-playground](https://mypy-play.net/?mypy=latest&python=3.12&gist=1f7509eac367068ebd544dac9f515642)).
- In particular, annotating function arguments with `numbers.Number`, `numbers.Real`, etc. is a terrible idea.
- Using runtime behavior like `isinstance(x, Number)` forces us to add `type: ignore` comments inside the else branch.
- In particular, this is a blocker to annotating the `torch.distributions` module (#144196), since this is the place where most of the uses of `numbers` are found, see: https://github.com/pytorch/pytorch/pull/144197#discussion_r1903324769
2. Since it's just an abstract base class, it requires users to do `Number.register(my_number_type)` to ensure `isinstance` succeeds.
3. Internally, `torch.tensor` doesn't seem to care if something is a `numbers.Number`, in fact, the supported types appear to be
- symbolic torch scalars `torch.SymBool`, `torch.SymInt` and `torch.SymFloat`
- `numpy` scalars `numpy.int32`, `numpy.int64`, `numpy.float32`, etc.
- python built-in scalars `bool`, `int`, `float`, `complex`
- things that can be converted to built-in scalars via `__bool__`, `__int__`, `__index__`, `__float__` or `__complex__` (requires specifying `dtype`)
(see [`/torch/csrc/utils/tensor_new.cpp`](https://github.com/pytorch/pytorch/blob/60d2e32fa4f49208a4f0389c6bb74141534e60db/torch/csrc/utils/tensor_new.cpp#L123-L202) and [`torch/_refs/__init__.py`](https://github.com/pytorch/pytorch/blob/60d2e32fa4f49208a4f0389c6bb74141534e60db/torch/_refs/__init__.py#L6475-L6521))
<details><summary> demo </summary>
```python
import torch
from numbers import Real
class MyReal(Real):
"""Simple wrapper class for float."""
__slots__ = ("val")
def __float__(self): return self.val.__float__()
def __complex__(self): return self.val.__complex__()
def __init__(self, x) -> None:
self.val = float(x)
@property
def real(self): return MyReal(self.val.real)
@property
def imag(self): return MyReal(self.val.imag)
def conjugate(self): return MyReal(self.val.conjugate())
def __abs__(self): return MyReal(self.val.__abs__())
def __neg__(self): return MyReal(self.val.__neg__())
def __pos__(self): return MyReal(self.val.__pos__())
def __trunc__(self): return MyReal(self.val.__trunc__())
def __floor__(self): return MyReal(self.val.__floor__())
def __ceil__(self): return MyReal(self.val.__ceil__())
def __round__(self, ndigits=None): return MyReal(self.val.__round__(ndigits=ndigits))
def __eq__(self, other): return MyReal(self.val.__eq__(other))
def __lt__(self, other): return MyReal(self.val.__lt__(other))
def __le__(self, other): return MyReal(self.val.__le__(other))
def __add__(self, other): return MyReal(self.val.__add__(other))
def __radd__(self, other): return MyReal(self.val.__radd__(other))
def __mul__(self, other): return MyReal(self.val.__mul__(other))
def __rmul__(self, other): return MyReal(self.val.__rmul__(other))
def __truediv__(self, other): return MyReal(self.val.__truediv__(other))
def __rtruediv__(self, other): return MyReal(self.val.__rtruediv__(other))
def __floordiv__(self, other): return MyReal(self.val.__floordiv__(other))
def __rfloordiv__(self, other): return MyReal(self.val.__rfloordiv__(other))
def __mod__(self, other): return MyReal(self.val.__mod__(other))
def __rmod__(self, other): return MyReal(self.val.__rmod__(other))
def __pow__(self, exponent): return MyReal(self.val.__pow__(exponent))
    def __rpow__(self, base): return MyReal(self.val.__rpow__(base))
class Pi:
def __float__(self) -> float: return 3.14
torch.tensor(MyReal(3.14), dtype=float) # ✅
torch.tensor(Pi(), dtype=float) # ✅
torch.tensor(MyReal(3.14)) # ❌ RuntimeError: Could not infer dtype of MyReal
torch.tensor(Pi()) # ❌ RuntimeError: Could not infer dtype of Pi
```
</details>
### Alternatives
There are 3 main alternatives:
1. Use `Union` type of the supported types (`tuple` for python 3.9). `torch` already provides for example like [`torch.types.Number`](https://github.com/pytorch/pytorch/blob/ec1c3ab3b28143c6e0392352c1c62ae0513ba024/torch/types.py#L64) and [`torch._prims_common.Number`](https://github.com/pytorch/pytorch/blob/ec1c3ab3b28143c6e0392352c1c62ae0513ba024/torch/_prims_common/__init__.py#L65)
2. Use builtin `Protocol` types like [`typing.SupportsFloat`](https://docs.python.org/3/library/typing.html#typing.SupportsFloat)
- The main disadvantage here is that `Tensor`, since it implements `__float__`, is a `SupportsFloat` itself, which could require changing some existing if-else tests.
3. Provide a custom `Protocol` type (a minimal sketch of alternatives 1 and 3 follows after this list).
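A minimal sketch of alternatives 1 and 3; `ScalarLike` and `SupportsPyFloat` are illustrative names, not existing torch APIs:
```python
from typing import Protocol, Union, runtime_checkable

import torch

# Alternative 1: an explicit Union of the supported scalar types
# (compare torch.types.Number / torch._prims_common.Number).
ScalarLike = Union[bool, int, float, complex, torch.SymBool, torch.SymInt, torch.SymFloat]

# Alternative 3: a custom runtime-checkable Protocol, usable both for static
# typing and for isinstance checks.
@runtime_checkable
class SupportsPyFloat(Protocol):
    def __float__(self) -> float: ...

def as_float(x: "SupportsPyFloat") -> float:
    # Works for anything exposing __float__, including the MyReal demo class above.
    return float(x)
```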
### Additional context
<details> <summary> One concern could be speed of `isinstance(x, Number)`, below is a comparison between the approaches. </summary>
```python
import torch
from numbers import Real
import numpy as np
from typing import SupportsFloat
T1 = Real
T2 = SupportsFloat
T3 = (bool, int, float, complex, torch.SymBool, torch.SymInt, torch.SymFloat, np.number)
print("Testing float")
x = 3.14
%timeit isinstance(x, T1) # 237 ns ± 0.374 ns
%timeit isinstance(x, T2) # 214 ns ± 0.325 ns
%timeit isinstance(x, T3) # 35 ns ± 0.844 ns
print("Testing np.float32")
y = np.float32(3.14)
%timeit isinstance(y, T1) # 106 ns ± 2.3 ns
%timeit isinstance(y, T2) # 223 ns ± 2.33 ns
%timeit isinstance(y, T3) # 104 ns ± 0.52 ns
print("Testing Tensor")
z = torch.tensor(3.14)
%timeit isinstance(z, T1) # 117 ns ± 0.962 ns
%timeit isinstance(z, T2) # 226 ns ± 0.508 ns
%timeit isinstance(z, T3) # 99.1 ns ± 0.699 ns
print("Testing string (non-match)")
w = "3.14"
%timeit isinstance(w, T1) # 114 ns ± 1.47 ns
%timeit isinstance(w, T2) # 2.21 μs ± 79.2 ns
%timeit isinstance(w, T3) # 95 ns ± 0.887 ns
```
One can see that `isinstance(val, SupportsFloat)` is roughly twice as slow as `isinstance(val, Real)` for a positive, but can be a lot slower for a negative. The `Union` can be a lot faster, but the speed depends on the order of the members (if we put `float` last, the first run takes ~90ns, since the argument is checked sequentially against the provided types).
</details>
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @malfet @xuzhao9 @gramster
| true
|
2,788,028,918
|
torch.compile() within TorchDispatchMode always causes an unknown guard failure.
|
galv
|
open
|
[
"triaged",
"module: __torch_dispatch__",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 5
|
COLLABORATOR
|
### 🐛 Describe the bug
When I run torch.compile() under an "infra" TorchDispatchMode, it seems that a recompile always happens, but I don't know what guard is failing:
```
import torch
from torch.overrides import TorchFunctionMode
from torch.utils._python_dispatch import TorchDispatchMode
from torch._dynamo import config
class MyFunctionMode(TorchFunctionMode):
def __torch_function__(self, func, types, args, kwargs=None):
return func(*args, **(kwargs or {}))
class MyDispatchMode(TorchDispatchMode):
def __torch_dispatch__(self, func, types, args, kwargs=None):
return func(*args, **(kwargs or {}))
@classmethod
def is_infra_mode(cls):
return True
def f(x, y):
return x @ y
x = torch.ones(10, device="cuda")
mode = MyFunctionMode()
f_compiled = torch.compile(f, backend="eager")
for i in range(2):
if i == 0:
config.error_on_recompile = False
if i == 1:
config.error_on_recompile = True
with mode:
f_compiled(x, x)
mode = MyDispatchMode()
for i in range(2):
if i == 0:
config.error_on_recompile = False
if i == 1:
config.error_on_recompile = True
with mode:
f_compiled(x, x)
```
Running the above script on top-of-tree pytorch gives the following error message:
```
I0114 18:25:17.922947 2151712 torch/_dynamo/utils.py:1521] [0/0] ChromiumEventLogger initialized with id eeb788f2-8d2b-4de5-adf7-22df55d8491d
I0114 18:25:17.924832 2151712 torch/_dynamo/symbolic_convert.py:2744] [0/0] Step 1: torchdynamo start tracing f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:17.925518 2151712 torch/fx/experimental/symbolic_shapes.py:3243] [0/0] create_env
I0114 18:25:17.946520 2151712 torch/_dynamo/symbolic_convert.py:3066] [0/0] Step 1: torchdynamo done tracing f (RETURN_VALUE)
I0114 18:25:17.950973 2151712 torch/_dynamo/output_graph.py:1460] [0/0] Step 2: calling compiler function eager
I0114 18:25:17.951271 2151712 torch/_dynamo/output_graph.py:1465] [0/0] Step 2: done compiler function eager
I0114 18:25:17.954654 2151712 torch/fx/experimental/symbolic_shapes.py:4623] [0/0] produce_guards
I0114 18:25:17.956163 2151712 torch/_dynamo/pgo.py:647] [0/0] put_code_state: no cache key, skipping
I0114 18:25:17.956523 2151712 torch/_dynamo/convert_frame.py:1078] [0/0] run_gc_after_compile: running gc
I0114 18:25:17.984054 2151712 torch/_dynamo/symbolic_convert.py:2744] [0/1] Step 1: torchdynamo start tracing f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:17.984482 2151712 torch/fx/experimental/symbolic_shapes.py:3243] [0/1] create_env
I0114 18:25:17.988030 2151712 torch/_dynamo/symbolic_convert.py:3066] [0/1] Step 1: torchdynamo done tracing f (RETURN_VALUE)
I0114 18:25:17.989872 2151712 torch/_dynamo/output_graph.py:1460] [0/1] Step 2: calling compiler function eager
I0114 18:25:17.990141 2151712 torch/_dynamo/output_graph.py:1465] [0/1] Step 2: done compiler function eager
I0114 18:25:17.992269 2151712 torch/fx/experimental/symbolic_shapes.py:4623] [0/1] produce_guards
I0114 18:25:17.993348 2151712 torch/_dynamo/pgo.py:647] [0/1] put_code_state: no cache key, skipping
I0114 18:25:17.993675 2151712 torch/_dynamo/convert_frame.py:1078] [0/1] run_gc_after_compile: running gc
Traceback (most recent call last):
File "/home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py", line 44, in <module>
f_compiled(x, x)
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/convert_frame.py", line 1422, in __call__
return self._torchdynamo_orig_callable(
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/convert_frame.py", line 1203, in __call__
result = self._inner_convert(
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/convert_frame.py", line 569, in __call__
return _compile(
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/convert_frame.py", line 920, in _compile
recompile_reasons = get_and_maybe_log_recompilation_reason(
File "/home/dgalvez/code/asr/pytorch-4/torch/_dynamo/guards.py", line 2780, in get_and_maybe_log_recompilation_reason
raise exc.RecompileError(message)
torch._dynamo.exc.RecompileError: Recompiling function f in /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
triggered by the following guard failure(s):
- 0/1:
- 0/0: ___check_torch_function_mode_stack()
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] ]
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] TorchDynamo compilation metrics:
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] Function, Runtimes (s)
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] _compile.compile_inner, 0.0418
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] OutputGraph.call_user_compiler, 0.0016
I0114 18:25:18.005912 2151712 torch/_dynamo/utils.py:751] gc, 0.0016
```
You can see from this section that three compiles happen: the first compile under MyTorchFunctionMode, the first compile under MyTorchDispatchMode, and the second compile under MyTorchDispatchMode:
```
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] * f /home/dgalvez/code/asr/pytorch-4/repros/dispatch_mode_recompile.py:19
I0114 18:25:18.005497 2151712 torch/_dynamo/eval_frame.py:398] ]
```
@mlazos since you worked on #131828, do you know if this is expected? For reasons related to #140979: https://github.com/pytorch/pytorch/pull/140979/files#r1877221096
I realize, just after having linked to that, that there is a brief answer to my question, but I will make this issue nonetheless for documentation purposes.
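As a hedged side note, the failing guard can also be surfaced without `error_on_recompile` by turning on recompile/guard logging (assuming these `torch._logging` flags are available in this nightly):
```python
# Sketch: print recompile reasons and installed guards via the public logging API.
import torch._logging

torch._logging.set_logs(recompiles=True, guards=True)
```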
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0a0+gitcd1b9e4
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L40S
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 25
On-line CPU(s) list: 0-24
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9454 48-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 25
Stepping: 1
BogoMIPS: 5491.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor fsrm flush_l1d
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 800 KiB (25 instances)
L1i cache: 800 KiB (25 instances)
L2 cache: 25 MiB (25 instances)
L3 cache: 800 MiB (25 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-24
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+gitcd1b9e4
[pip3] triton==3.2.0+git35c6c7c6
[conda] numpy 1.22.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0a0+gitcd1b9e4 dev_0 <develop>
[conda] triton 3.2.0+git35c6c7c6 pypi_0 pypi
```
cc @Chillee @ezyang @zou3519 @albanD @samdow @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,788,005,968
|
test
|
angelayi
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,787,960,566
|
speculation_log: Raise a unique error for divergence issues
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144785
This is primarily sent for discussion and to see what tests fail due to
this. The idea is that rather than capturing this as a regex on the
fail_reason, we just give it a unique failure type.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,787,959,498
|
symbolic_convert: Don't fail when we hit a undefined name
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144784
We're using a Python builtin NameError here,
instead of throwing an Unsupported exception. This causes the
NameError to get wrapped in an InternalTorchDynamoError
instead of just causing a graph break and letting the user code fail
directly.
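An illustrative sketch of the pattern (not the actual symbolic_convert code; `Unsupported` is the existing exception type in `torch._dynamo.exc`):
```python
# Illustrative only: signal a graph break via Unsupported rather than raising a
# builtin NameError that gets wrapped in InternalTorchDynamoError.
from torch._dynamo.exc import Unsupported

def load_name_or_graph_break(name: str, f_globals: dict, f_builtins: dict):
    if name in f_globals:
        return f_globals[name]
    if name in f_builtins:
        return f_builtins[name]
    # Before: raise NameError(f"name '{name}' is not defined")
    raise Unsupported(f"undefined name {name!r}; graph break and let user code fail in eager")
```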
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,787,944,513
|
[codemod] Remove unused-variable in caffe2/aten/src/ATen/native/quantized/cpu/fbgemm_utils.cpp +1
|
r-barnes
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: cpp",
"topic: improvements",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Summary:
LLVM-15 has a warning `-Wunused-variable` which we treat as an error because it's so often diagnostic of a code issue. Unused variables can compromise readability or, worse, performance.
This diff either (a) removes an unused variable and, possibly, its associated code or (b) qualifies the variable with `[[maybe_unused]]`.
- If you approve of this diff, please use the "Accept & Ship" button :-)
Test Plan: Sandcastle
Reviewed By: palmje
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,787,897,138
|
Fix triton masked loading for non-block tl.loads
|
isuruf
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"ci-no-td"
] | 17
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144782
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,787,859,796
|
[FSDP2] Make post-backward condition more robust
|
awgu
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (fsdp2)"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144781
Fixes https://github.com/pytorch/pytorch/issues/144755
cc @H-Huang @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,787,773,274
|
[AOTI] Mark run_impl as optnone
|
desertfire
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144780
Summary: Optimizing large cpp wrapper code can be really slow. Using this PR to measure the impact of setting optnone.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @chauhang @aakhundov @BoyuanFeng
| true
|
2,787,710,102
|
Adding Infiniband to RDZV Backend for optimal torch run training
|
ArkashJ
|
open
|
[
"oncall: distributed",
"module: docs"
] | 10
|
NONE
|
### 📚 The doc issue
The documentation could include more information on optimally using InfiniBand for running ML training with torchrun. It would be helpful to add the bash commands showing how to set the RDZV host to the InfiniBand URL. *For Nvidia GPUs*
**Infiniband URL**
One can run `ifconfig` and parse the list to look for a key starting with `ib` (for example ib0 with inet of 10.1.0.12).
**Mellanox Ports and Link Layer**
Run `ibstat` to get the list of mellanox ports and the link layer they are using.
To verify the Mellanox ports being active or not, run `ibv_devinfo`.
**Ping the port**
If you do not have the RDMA over InfiniBand protocol set up, run `ping 10.x.0.y` to see if you can reach the InfiniBand connection. If you have RDMA installed, you can verify the write latency with `ib_write_lat 10.x.0.y` (to verify write bandwidth, run `ib_write_bw 10.x.0.y`).
**Understanding Mellanox configs**
Section 7.7.1 of the A100 manual (https://docs.nvidia.com/dgx/pdf/dgxa100-user-guide.pdf) covers the Mellanox configuration: run `sudo mlxconfig -e query | egrep -e Device\|LINK_TYPE` to see if you're using InfiniBand or Ethernet.
Run `sudo mlxconfig -y -d <device-path> set LINK_TYPE_P1=<config-number>` to change the link type to IB.
Finally, in your `~/.bashrc`, `export MASTER_ADDR="10.x.0.y"`, and set your RDZV_ENDPOINT to $MASTER_ADDR:$MASTER_PORT (RDZV_BACKEND is typically `c10d`). Our master port values are usually 8001, 5001 (similar to ones a react dev would use, just a personal preference).
```bash
torchrun \
--nnodes 3 \
--nproc_per_node 8 \
--node_rank $NODE_RANK \
--max-restarts $NUM_ALLOWED_FAILURES \
--rdzv-id $RDZV_ID \
--rdzv-backend $RDZV_BACKEND \
--rdzv-endpoint $RDZV_ENDPOINT \
scripts/train/pretrain.py \
```
Maybe these can be incorporated into the documentation? Feel free to reach out to me personally to talk more about optimized trainings including setting FlashAttention, BFloat16, Webdatasets, reading directly from raid, FSDP etc.
<img width="1052" alt="Image" src="https://github.com/user-attachments/assets/919664b1-ba40-4c6f-a7c9-280a8c3d275a" />
`https://pytorch.org/docs/stable/elastic/run.html`
### Suggest a potential alternative/fix
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @svekars @brycebortree @sekyondaMeta @AlannaBurke
| true
|
2,787,642,883
|
`torch.profiler.record_function` doesn't register kernels from `backward` function
|
anmyachev
|
open
|
[
"oncall: profiler"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
`__profile_kernel_of_func` (`record_function` label) shows zero timings for XPU (maybe for `CUDA` the situation is the same, but I have no way to check) unless `record_function` is used inside `backward` function.
```python
import torch
from torch.profiler import profile, ProfilerActivity, record_function
cache_size = 256
device = "xpu"
class _attention(torch.autograd.Function):
@staticmethod
def forward(ctx, cache):
ctx.save_for_backward(cache)
return cache
@staticmethod
def backward(ctx, triton_do):
cache = ctx.saved_tensors
# with record_function("__profile_kernel_of_func"): <- using this you can get the necessary timings
cache[0].zero_()
return cache
attention = _attention.apply
cache = torch.randn((128, 128), dtype=torch.float32, device=device, requires_grad=True)
triton_o = attention(cache)
triton_do = torch.randn_like(triton_o)
triton_fn = lambda: triton_o.backward(triton_do, retain_graph=True)
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.XPU]) as prof:
with record_function("__profile_kernel_of_func"):
triton_fn()
torch.xpu.synchronize()
print(prof.events())
```
Output:
```
# case1 - `record_function` is not used in `backward` function and one can see that timings are zero
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self XPU Self XPU % XPU total XPU time avg # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
__profile_kernel_of_func 90.70% 193.820ms 90.70% 193.820ms 193.820ms 0.000us 0.00% 0.000us 0.000us 1
autograd::engine::evaluate_function: _attentionBackw... 0.01% 20.711us 3.38% 7.220ms 7.220ms 0.000us 0.00% 8.640us 8.640us 1
_attentionBackward 0.04% 86.246us 3.37% 7.199ms 7.199ms 0.000us 0.00% 8.640us 8.640us 1
aten::zero_ 0.03% 53.549us 3.33% 7.113ms 7.113ms 0.000us 0.00% 8.640us 8.640us 1
aten::fill_ 3.17% 6.772ms 3.30% 7.059ms 7.059ms 8.640us 50.00% 8.640us 8.640us 1
urEnqueueKernelLaunch 0.13% 287.035us 0.13% 287.035us 287.035us 0.000us 0.00% 0.000us 0.000us 1
at::native::xpu::VectorizedElementwiseKernel<4, at::... 0.00% 0.000us 0.00% 0.000us 0.000us 8.640us 50.00% 8.640us 8.640us 1
autograd::engine::evaluate_function: torch::autograd... 0.00% 5.280us 5.93% 12.663ms 12.663ms 0.000us 0.00% 8.640us 8.640us 1
torch::autograd::AccumulateGrad 0.02% 34.944us 5.92% 12.658ms 12.658ms 0.000us 0.00% 8.640us 8.640us 1
aten::new_empty_strided 0.00% 7.971us 0.01% 24.983us 24.983us 0.000us 0.00% 0.000us 0.000us 1
aten::empty_strided 0.01% 17.012us 0.01% 17.012us 17.012us 0.000us 0.00% 0.000us 0.000us 1
aten::copy_ 5.86% 12.514ms 5.90% 12.598ms 12.598ms 8.640us 50.00% 8.640us 8.640us 1
urEnqueueKernelLaunch 0.04% 83.934us 0.04% 83.934us 83.934us 0.000us 0.00% 0.000us 0.000us 1
at::native::xpu::VectorizedElementwiseKernel<4, at::... 0.00% 0.000us 0.00% 0.000us 0.000us 8.640us 50.00% 8.640us 8.640us 1
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 213.703ms
Self XPU time total: 17.280us
case #2 - `record_function` is used in `backward` function
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self XPU Self XPU % XPU total XPU time avg # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
__profile_kernel_of_func 90.60% 218.883ms 90.60% 218.883ms 218.883ms 0.000us 0.00% 0.000us 0.000us 1
autograd::engine::evaluate_function: _attentionBackw... 0.01% 20.057us 3.60% 8.690ms 8.690ms 0.000us 0.00% 8.320us 8.320us 1
_attentionBackward 0.03% 81.833us 3.59% 8.670ms 8.670ms 0.000us 0.00% 8.320us 8.320us 1
__profile_kernel_of_func 0.10% 230.601us 3.55% 8.588ms 8.588ms 0.000us 0.00% 8.320us 8.320us 1
aten::zero_ 0.02% 54.334us 3.46% 8.358ms 8.358ms 0.000us 0.00% 8.320us 8.320us 1
aten::fill_ 3.30% 7.984ms 3.44% 8.304ms 8.304ms 8.320us 49.06% 8.320us 8.320us 1
urEnqueueKernelLaunch 0.13% 319.102us 0.13% 319.102us 319.102us 0.000us 0.00% 0.000us 0.000us 1
at::native::xpu::VectorizedElementwiseKernel<4, at::... 0.00% 0.000us 0.00% 0.000us 0.000us 8.320us 49.06% 8.320us 8.320us 1
autograd::engine::evaluate_function: torch::autograd... 0.00% 6.131us 5.80% 14.019ms 14.019ms 0.000us 0.00% 8.640us 8.640us 1
torch::autograd::AccumulateGrad 0.02% 42.298us 5.80% 14.013ms 14.013ms 0.000us 0.00% 8.640us 8.640us 1
aten::new_empty_strided 0.00% 9.754us 0.01% 26.166us 26.166us 0.000us 0.00% 0.000us 0.000us 1
aten::empty_strided 0.01% 16.412us 0.01% 16.412us 16.412us 0.000us 0.00% 0.000us 0.000us 1
aten::copy_ 5.73% 13.855ms 5.77% 13.945ms 13.945ms 8.640us 50.94% 8.640us 8.640us 1
urEnqueueKernelLaunch 0.04% 90.206us 0.04% 90.206us 90.206us 0.000us 0.00% 0.000us 0.000us 1
at::native::xpu::VectorizedElementwiseKernel<4, at::... 0.00% 0.000us 0.00% 0.000us 0.000us 8.640us 50.94% 8.640us 8.640us 1
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 241.593ms
Self XPU time total: 16.960us
```
### Versions
Pytorch pin: `1e881ceecfe80532206ca4e0acb64391fab8b935`.
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,787,609,917
|
[ROCm] CK SDPA - Move arch check to CK patch
|
alugorey
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: releng",
"skip-pr-sanity-checks",
"rocm",
"ciflow/rocm"
] | 11
|
CONTRIBUTOR
|
__gfxXXX__ should only be visible by device code. Move the check to the ck kernel
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @albanD
| true
|
2,787,598,412
|
Slow performance when running TransformerDecoder with low batch size in fp16
|
DaniNem
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
Hey!
I want to understand why this snippet of code runs slow for batch sizes of 2/4/8 when using fp16. The time it takes for bs=2 on my system is 0.55 sec for the non-batched version and 2.8 sec for the batched one.
```python
from torch import nn
import torch
import time
from torch.amp import autocast
device = torch.device('cuda')
hidden_dim = 1024
USE_FP16 = True
BS = 2
TOTAL_INPUT = 250
decoder_layer = nn.TransformerDecoderLayer(d_model=hidden_dim,
dim_feedforward=hidden_dim*4,nhead=8,batch_first=True,)
transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=3,).to(device)
with autocast('cuda', enabled=USE_FP16):
with torch.no_grad():
memory = torch.rand((1, 1000, hidden_dim), device=device)
tgt = torch.rand((1, 36, hidden_dim), device=device)
out = transformer_decoder(tgt, memory)
s = time.perf_counter()
for _ in range(TOTAL_INPUT):
out = transformer_decoder(tgt, memory)
print(time.perf_counter() - s)
memory = torch.rand((BS, 1000, hidden_dim), device=device)
tgt = torch.rand((BS, 36, hidden_dim), device=device)
s = time.perf_counter()
for _ in range(TOTAL_INPUT // BS):
out = transformer_decoder(tgt, memory)
print(time.perf_counter() - s)
```
Thanks!
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.12.7 (main, Dec 9 2024, 15:02:40) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 4000
Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Stepping: 10
CPU MHz: 3200.000
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 6399.96
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 1.5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.5.1+cu121
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] Could not collect
| true
|
2,787,593,946
|
compile time regression 1/9
|
zou3519
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: compile-time"
] | 16
|
CONTRIBUTOR
|
[TorchInductor OSS Compile Time Dashboard](https://www.internalfb.com/intern/unidash/dashboard/?tab_id=1587385408528217)
- torchbench inference: sam_fast_dynamo_benchmark 70->84
- HF inference: BartForConditionalGeneration 32->42
- TIMM inference (a lot of models regressed)
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu @oulgen @jamesjwu @aorenste @anijain2305 @laithsakka
| true
|
2,787,557,214
|
Enable CPP Extension Open Registration tests on Arm
|
murste01
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Enables most tests under CPP Extension Open Registration as they pass on Arm now.
| true
|
2,787,518,284
|
[caffe2] Use the manifold cache backend as the default
|
AishwaryaSivaraman
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 17
|
CONTRIBUTOR
|
Test Plan: CI
D68155591
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,787,506,259
|
aot_inductor TIMM convit_base inference regression on dashboard
|
zou3519
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor",
"pt2-pass-rate-regression"
] | 2
|
CONTRIBUTOR
|
See https://hud.pytorch.org/benchmark/timm_models/inductor_aot_inductor?dashboard=torchinductor&startTime=Tue,%2031%20Dec%202024%2015:26:32%20GMT&stopTime=Tue,%2014%20Jan%202025%2015:26:32%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=main&lCommit=1dab79470dbecef79ba4c7d4308d8a181091e58e&rBranch=main&rCommit=01034e963c9102c6a4a666c7666afd12aee0bfb3
The model reports a pass but the speedup numbers have gone to 0, which might imply that something went wrong in the reporting process?
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu @desertfire @chenyang78 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @huydhn
| true
|
2,787,470,791
|
Removed unused _RequiredParameter
|
dmpiergiacomo
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"topic: bc breaking",
"ciflow/inductor",
"release notes: optim",
"ci-no-td"
] | 26
|
CONTRIBUTOR
|
As per this [discussion](https://discuss.pytorch.org/t/a-question-about-requiredparameter/137977), I figured that `_RequiredParameter` is no longer used.
The `required` object was initially introduced in this [PR](https://github.com/pytorch/pytorch/commit/4db66679238dae8539c270a61f60b9c0c4bb440d) as the `SGD` optimizer did not offer a default value for the learning rate. However there isn't a single place in the code base using `_RequiredParameter`, nor `required`. I am therefore removing unused `_RequiredParameter` and `required`.
Everything not included in this PR is Not a Contribution.
| true
|
2,787,298,410
|
[ARM] multiple test failures in TestQuantizedConv on Aarch64
|
robert-hardwick
|
open
|
[
"oncall: quantization",
"module: tests",
"module: arm"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
After enabling `test_quantization` we consistently see 2 test failures on all Aarch64 platforms.
**TestQuantizedConv.test_qconv2d_relu** and **TestQuantizedConv.test_qconv2d**
The failure output is:
```
AssertionError:
Arrays are not almost equal to 0 decimals
X: tensor([[[[0.0000, 0.0000, 2.4028, ..., 0.0000, 0.0000, 3.6042],
... contd..
size=(1, 54, 10, 7), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=1.2014017519687867,
zero_point=0), W: tensor([[[[ 0.0000]],
.... contd...
quantization_scheme=torch.per_tensor_affine, scale=0.32774215094962955,
zero_point=0), b: None, strides: (1, 1),
pads: (0, 0), o_pads: None, dilations: (1, 1),
groups: 27, y_s: 4.200000005809841, y_zp: 0
Mismatched elements: 23 / 9450 (0.243%)
Max absolute difference: 255
Max relative difference: 255.
x: array([[[[0, 0, 0, ..., 0, 1, 1],
[0, 1, 0, ..., 0, 1, 0],
[0, 0, 0, ..., 1, 1, 0],...
y: array([[[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],...
To execute this test, run the following from the base repo dir:
python test/quantization/core/test_quantized_op.py TestQuantizedConv.test_qconv2d
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
The assertion is here
https://github.com/pytorch/pytorch/blob/main/test/quantization/core/test_quantized_op.py#L5182
These tests use hypothesis library and the error doesn't happen for all input combinations, but below is an example of some inputs to test_qconv2d which cause this issue.
```
W_scale=[1.3],
W_zero_point=[0],
X_scale=1.2,
X_zero_point=0,
Y_scale=1,
Y_zero_point=0,
batch_size=1,
dilation=1,
groups=27,
height=10,
input_channels_per_group=2,
kernel_h=1,
kernel_w=1,
output_channels_per_group=5,
pad_h=0,
pad_w=0,
stride_h=1,
stride_w=1,
width=7):
```
This makes it difficult to reproduce since hypothesis fixes the seed only when CI=1. Also, the line above,
`python test/quantization/core/test_quantized_op.py TestQuantizedConv.test_qconv2d`, doesn't work, as `test_quantized_op.py` doesn't have an `if __name__ == "__main__"` entrypoint.
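For anyone trying to reproduce locally, a minimal sketch of what would make the printed command usable (assumption: `test_quantized_op.py` can import `run_tests` like other PyTorch test modules; with CI=1 set, hypothesis uses a fixed seed):
```python
# Sketch of a local-repro entrypoint for test_quantized_op.py.
from torch.testing._internal.common_utils import run_tests

if __name__ == "__main__":
    run_tests()
```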
I have traced the source of the bug to ideep/oneDNN, but raising this issue here as a reference and to keep track of the fix.
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git2e42be0
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+git2e42be0
[conda] No relevant packages
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @mruberry @ZainRizvi @malfet @snadampal @milpuz01
| true
|
2,787,292,491
|
Mark CUDA-12.6 as experimental for 2.6 release
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Because this is the first time we are trying to release it, and it is also the first release to use manylinux2_28.
| true
|
2,787,280,299
|
Is it possible to remove NCCL submodule and use only nccl binaries from pypi instead ?
|
atalman
|
open
|
[
"module: build",
"module: cuda",
"triaged",
"module: nccl"
] | 8
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Currently we do both. We have the submodule:
https://github.com/pytorch/pytorch/tree/main/third_party/nccl
And we use pypi nccl binaries:
https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L62
And we have a code to check if submodule version is consistent with pypi version, here:
https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L434
We also build latest nccl from source here:
https://github.com/pytorch/pytorch/blob/main/.ci/docker/common/install_cuda.sh#L74
This prevents us from having different nccl binaries for different CUDA builds. For instance, the latest nccl as of Jan 14 is [2.24.3](https://pypi.org/project/nvidia-nccl-cu12/2.24.3/), however we are still using 2.21.5 since it's compatible with CUDA 11.8.
We would prefer to keep nccl 2.21.5 for CUDA 11.8 builds, but for CUDA 12.4 and 12.6 move to a newer nccl version.
Hence the question: what is the nccl submodule used for, and can we remove it and rely only on binaries?
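As a side note, a quick hedged sketch for checking which NCCL a given torch wheel actually reports (assumes a CUDA-enabled build):
```python
# Report the NCCL version bundled with the installed torch build, useful when
# comparing the vendored submodule build against the pypi nvidia-nccl wheel.
import torch
import torch.distributed as dist
from torch.cuda import nccl

if torch.cuda.is_available() and dist.is_nccl_available():
    print("NCCL version:", nccl.version())  # e.g. (2, 21, 5)
```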
cc @malfet @seemethere @ptrblck @msaroufim @eqy @albanD @kwen2501
### Versions
2.7
| true
|
2,787,236,868
|
Unable to build with ATEN_THREADING=TBB option
|
carusyte
|
open
|
[
"module: build",
"module: docs",
"triaged",
"module: tbb"
] | 4
|
NONE
|
While the doc [here](https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html) says we could set the `ATEN_THREADING` build option to TBB, I encountered the following error:
```
<- omitted previous log for brevity ->
-- Looking for backtrace
-- Looking for backtrace - found
-- backtrace facility detected in default set of libraries
-- Found Backtrace: /usr/include
-- headers outputs:
-- sources outputs:
-- declarations_yaml outputs:
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Success
-- Using ATen parallel backend: TBB
CMake Error at caffe2/CMakeLists.txt:38 (message):
Unknown ATen parallel backend: TBB
```
It seems the CMake build does not support this option yet?
https://github.com/pytorch/pytorch/blob/95b41d2aa43c606d65e127d4825c08baf9fcacd9/caffe2/CMakeLists.txt#L38
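For reference, after a successful build the compiled-in backend can be checked with the existing `torch.__config__` helper:
```python
# Prints the ATen parallel backend that was compiled in
# (e.g. OpenMP, native thread pool, or TBB).
import torch

print(torch.__config__.parallel_info())
```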
cc @malfet @seemethere @svekars @brycebortree @sekyondaMeta @AlannaBurke
| true
|
2,787,193,050
|
Matmul with int32 parameters on Intel GPU leads to errors
|
qwqdlt
|
open
|
[
"triaged",
"module: xpu"
] | 8
|
NONE
|
### 🐛 Describe the bug
torch.matmul with int32 parameters leads to errors, when running on XPU (Intel GPU) in the following program.
```python
import numpy as np
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.val = torch.nn.Parameter(torch.ones([1], dtype=torch.int32), requires_grad=False)
def forward(self, *args):
val = self.val
out = torch.matmul(val, args[0])
return (out)
m = Model()
inp = [np.ones([1,1], np.int32)]
m.to('cpu')
output1 = m(*[torch.from_numpy(v).to('cpu') for v in inp])
print(output1)
m.to('xpu')
output2 = m(*[torch.from_numpy(v).to('xpu') for v in inp])
print(output2)
```
### **Error Logs**
```bash
tensor([1], dtype=torch.int32)
Traceback (most recent call last):
File "/xxx/test.py", line 23, in <module>
output2 = m(*[torch.from_numpy(v).to('xpu') for v in inp])
File "/home/xxx/anaconda3/envs/intel-gpu-pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/xxx/anaconda3/envs/intel-gpu-pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/xxx/test.py", line 11, in forward
out = torch.matmul(val, args[0])
RuntimeError: could not create a primitive descriptor for a matmul primitive
```
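A possible workaround sketch (an untested assumption on my part, not a fix): oneDNN's matmul primitive may not accept int32 inputs, so casting to a floating dtype for the matmul and casting the result back may avoid the error.
```python
# Possible workaround sketch (untested assumption): do the XPU matmul in float32
# and cast the result back to int32.
import torch

a = torch.ones(1, dtype=torch.int32, device="xpu")
b = torch.ones(1, 1, dtype=torch.int32, device="xpu")
out = torch.matmul(a.to(torch.float32), b.to(torch.float32)).to(torch.int32)
print(out)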
### Versions
PyTorch version: 2.5.1+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 18
On-line CPU(s) list: 0-17
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 125H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 9
Socket(s): 1
Stepping: 4
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtop
ology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_sin
gle ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 432 KiB (9 instances)
L1i cache: 576 KiB (9 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241211
[pip3] pytorch-triton-xpu==3.1.0
[pip3] torch==2.5.1+xpu
[pip3] torchaudio==2.5.1+xpu
[pip3] torchvision==0.20.1+xpu
[conda] numpy 2.1.3 pypi_0 pypi
[conda] pytorch-triton-xpu 3.1.0 pypi_0 pypi
[conda] torch 2.5.1+xpu pypi_0 pypi
[conda] torchaudio 2.5.1+xpu pypi_0 pypi
[conda] torchvision 0.20.1+xpu pypi_0 pypi
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,787,176,572
|
Fix full_like decomposition to preserve strides
|
isuruf
|
open
|
[
"oncall: distributed",
"open source",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144765
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,787,054,655
|
EZ fix to make sure local pytest run succeeds in export
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144764
Previously run_tests() was protected under the IS_FBCODE flag so that the following works:
```
python test/export/test_export_legacy.py
```
But it fails on:
```
pytest test/export/test_export_legacy.py
```
This is because pytest doesn't seem to get triggered through run_tests().
Differential Revision: [D68152737](https://our.internmc.facebook.com/intern/diff/D68152737)
| true
|
2,787,020,727
|
Unconditional dependency on setuptools
|
adamjstewart
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 33
|
CONTRIBUTOR
|
In [segmentation-models.pytorch](https://github.com/qubvel-org/segmentation_models.pytorch/actions/runs/12753548754/job/35545453127#step:6:3442), we noticed that actions like `torch.compile` actually require setuptools for all Python versions. Setuptools is unconditionally imported in `torch/utils/cpp_extension.py`. This PR changes the setuptools dependency to be required at runtime for all Python versions.
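A minimal illustration of the failure mode this PR addresses (the import chain is simplified):
```python
# torch.utils.cpp_extension imports setuptools unconditionally, so any
# torch.compile path that reaches it fails if setuptools is not installed.
import setuptools  # noqa: F401  -- must be importable at runtime
from torch.utils import cpp_extension  # raises if setuptools is missing
```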
| true
|
2,786,988,144
|
Remove optimization pass to reduce number of copies in export IR
|
tugsbayasgalan
|
closed
|
[
"ciflow/trunk",
"release notes: export"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144762
This pass seems like a big headache because it fails to distinguish user-introduced copy_ vs the ones export introduces. The original motivation was that when we do run_decompositions, we unlift the exported program, which introduces some inplace-update nodes at the end of the graph for the buffers that have been mutated. This is necessary because we don't want to have extra outputs for the unlifted module, since it should have the same calling convention as an eager module.
These inplace-update nodes, however, get functionalized, which is bad for perf now that we have extra copy nodes at the end of the graph. Let's see what happens when we try disabling it. If it causes some performance regressions, then we should solve this for real. Some ideas:
1. Make the copy_ for the final update a HOP that we special-case in decompositions.
2. When doing inference-to-inference, instead of retracing, we run only local decompositions per operator. We kind of used to do that, but it is hard to keep this in sync with the rest of export as time goes on.
Differential Revision: [D68152886](https://our.internmc.facebook.com/intern/diff/D68152886)
| true
|
2,786,918,451
|
torch.rand_like() for nested tensors
|
kkj15dk
|
closed
|
[
"triaged",
"module: nestedtensor"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
torch.randn_like() works for nested tensors, however torch.rand_like() does not.
Is there a reason for this? I was imagining the implementation would be quite similar.
I need to change elements in a nested tensor randomly, with the probability given by a uniform distribution. Maybe I can use dropout in some weird sense, but it has to work in both training and inference.
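A possible workaround sketch in the meantime (assuming the nested tensor can be unbound into its components):
```python
# Build a uniform-distributed nested tensor by applying rand_like to each
# unbound component (assumes the strided nested layout).
import torch

nt = torch.nested.nested_tensor([torch.zeros(3, 4), torch.zeros(5, 4)])
uniform_nt = torch.nested.nested_tensor([torch.rand_like(t) for t in nt.unbind()])
```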
### Alternatives
_No response_
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,786,694,443
|
[Intel CPU] Fix issue #143482.
|
RanTao123
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Fix issue in https://github.com/pytorch/pytorch/issues/143482.
To avoid out-of-bounds access, values in indices should be less than num_weights.
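A small illustration of the invariant being enforced:
```python
# Every embedding index must lie in [0, num_weights), otherwise the lookup is
# an out-of-bounds access.
import torch

num_weights = 10
indices = torch.tensor([0, 3, 9])
assert bool(((indices >= 0) & (indices < num_weights)).all())
```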
| true
|
2,786,606,462
|
[Intel GPU] Avoid unnecessary copy when the dst of Matmul is non-contiguous
|
jianyizh
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"module: xpu"
] | 13
|
CONTRIBUTOR
|
We should not always call contiguous on the dst of matmul. We have already removed copy of matmul input in https://github.com/pytorch/pytorch/pull/143784
I also fixed an accuracy issue by using the onednn sum post-op instead of binary add in the in-place case, to avoid a UT failure.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,786,463,069
|
[APS] Update proxy_tensor to take kwargs
|
yvonne-lab
|
closed
|
[
"fb-exported",
"Stale",
"release notes: fx",
"fx"
] | 5
|
NONE
|
Summary: `__init__` should be able to take kwargs. This diff updates `AttrProxy` to take kwargs when initializing the proxy class.
Test Plan: Existing unit tests
Differential Revision: D68092190
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,786,431,845
|
Installation is broken: `pip3 install torch torchvision torchaudio` fails
|
hirak99
|
closed
|
[
"module: binaries",
"triaged",
"module: python version"
] | 4
|
NONE
|
### 🐛 Describe the bug
Installation is broken.
Following the instructions here, https://pytorch.org/get-started/locally/ if I select (Stable, Linux, Pip, Python, Cuda 12.4), it says I should run the following - `pip3 install torch torchvision torchaudio`.
**Replication of Issue**
Create a new venv and run this -
```sh
pip3 install torch torchvision torchaudio
```
**Error**
```
$ pip3 install torch torchvision torchaudio
Collecting torch
Downloading torch-2.5.1-cp313-cp313-manylinux1_x86_64.whl.metadata (28 kB)
ERROR: Ignored the following yanked versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3
ERROR: Could not find a version that satisfies the requirement torchvision (from versions: none)
ERROR: No matching distribution found for torchvision
```
**More Information**
My OS: Arch Linux with Python 3.13.1
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 19.1.6
CMake version: version 3.31.4
Libc version: glibc-2.40
Python version: 3.13.1 (main, Dec 4 2024, 18:05:56) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.12.9-arch1-1-x86_64-with-glibc2.40
Is CUDA available: N/A
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 565.77
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 12
CPU(s) scaling MHz: 60%
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7202.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi sgx_lc md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
```
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,786,384,412
|
[Reopen] [Intel GPU] Set higher tolerance for some models only on XPU Device
|
retonym
|
open
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ci-no-td"
] | 20
|
CONTRIBUTOR
|
Reopen the previous stale closed PR https://github.com/pytorch/pytorch/pull/134192
We need to increase the tolerance slightly to ensure that certain models pass the accuracy check on the XPU device.
This pull request preserves the original tolerance threshold for the CUDA device and introduces a new key, `higher_fp16_bf16_xpu`, which only impacts the XPU device.
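A rough sketch of how such a device-gated tolerance key can be consulted (the surrounding benchmark code is paraphrased; only the new key name comes from this PR):
```python
# Hypothetical lookup: keep the existing CUDA/CPU threshold untouched and only
# consult the new key when running on an XPU device.
tolerances = {
    "higher_fp16_bf16": 1e-2,       # assumed name for the existing key; value illustrative
    "higher_fp16_bf16_xpu": 2e-2,   # new XPU-only key; value illustrative
}

def pick_tolerance(device: str) -> float:
    key = "higher_fp16_bf16_xpu" if device == "xpu" else "higher_fp16_bf16"
    return tolerances[key]

print(pick_tolerance("cuda"), pick_tolerance("xpu"))
```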
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,786,333,527
|
[fsdp2] possibly unreliable condition check to enforce post_backward()
|
leonardo0lyj
|
closed
|
[
"triaged",
"module: fsdp"
] | 0
|
NONE
|
Hey Andrew @awgu,
As a big fan of FSDP2, I keep posting improvements 😄
This condition check, which enforces `post_backward()` at root backward, may be a bit unreliable:
```python
if fsdp_param_group and (
    fsdp_param_group.is_unsharded or not fsdp_param_group.unshard_in_backward
):
    fsdp_param_group.post_backward()
```
https://github.com/pytorch/pytorch/blob/17e05cde0c405dad11a17bcdc0f85f941dcc6c94/torch/distributed/fsdp/_fully_shard/_fsdp_state.py#L281
Because we have a flag called `reshard_after_backward`: when it is `True`, a group whose `post_backward()` has already run still has its params in the unsharded state (`fsdp_param_group.is_unsharded is True`), so in `_root_post_backward_final_callback` this condition check calls the already-run `post_backward()` a second time, which adds logical complexity and is prone to error.
Intuitively, this condition check should prevent `post_backward()` from being called twice and only capture a `post_backward()` that has not been called yet. For example: `if (fsdp_param_group and fsdp_param_group._training_state != TrainingState.POST_BACKWARD ...):`
What do you think? 😁
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang
| true
|
2,786,321,057
|
[WIP][Intel GPU] [pt2e] remove h2d copy of scale and zero point in int8 conv
|
jianyizh
|
closed
|
[
"module: cpu",
"open source",
"release notes: quantization",
"module: inductor"
] | 2
|
CONTRIBUTOR
|
Not ready for review
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,786,241,683
|
[DCP] Fix fsspec fsync bug on .finish()
|
cassanof
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Fixes #144752
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,786,238,092
|
[DCP] BUG: FsspecWriter calls os.fsync on .finish(), therefore program crashes on checkpoint save
|
cassanof
|
closed
|
[
"triaged",
"oncall: distributed checkpointing"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
With fsspec, you can't call fsync on a file object. This has caused bugs during the write phase, which were fixed here: https://github.com/pytorch/pytorch/pull/119287
However, this issue has not been fixed for the `.finish()` method, which `FsspecWriter` inherits:
https://github.com/pytorch/pytorch/blob/17e05cde0c405dad11a17bcdc0f85f941dcc6c94/torch/distributed/checkpoint/_fsspec_filesystem.py#L91
and
https://github.com/pytorch/pytorch/blob/17e05cde0c405dad11a17bcdc0f85f941dcc6c94/torch/distributed/checkpoint/filesystem.py#L587
So, people will get this error:
```
[rank7]: File "/mnt/large_shared/federico/env_nightly/lib/python3.11/site-packages/torch/distributed/checkpoint/logger.py", line 83, in wrapper
[rank7]: result = func(*args, **kwargs)
[rank7]: ^^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/large_shared/federico/env_nightly/lib/python3.11/site-packages/torch/distributed/checkpoint/utils.py", line 429, in inner_func
[rank7]: return func(*args, **kwargs)
[rank7]: ^^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/large_shared/federico/env_nightly/lib/python3.11/site-packages/torch/distributed/checkpoint/state_dict_saver.py", line 152, in save
[rank7]: return _save_state_dict(
[rank7]: ^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/large_shared/federico/env_nightly/lib/python3.11/site-packages/torch/distributed/checkpoint/state_dict_saver.py", line 334, in _save_state_dict
[rank7]: return distW.all_reduce("write", write_data, finish_checkpoint)
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/large_shared/federico/env_nightly/lib/python3.11/site-packages/torch/distributed/checkpoint/utils.py", line 231, in all_reduce
[rank7]: raise final_result
[rank7]: torch.distributed.checkpoint.api.CheckpointException: CheckpointException ranks:dict_keys([0])
[rank7]: Traceback (most recent call last): (RANK 0)
[rank7]: File "/mnt/large_shared/federico/env_nightly/lib/python3.11/site-packages/torch/distributed/checkpoint/utils.py", line 222, in all_reduce
[rank7]: result = reduce_fun(cast(List[T], all_data))
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/large_shared/federico/env_nightly/lib/python3.11/site-packages/torch/distributed/checkpoint/logger.py", line 83, in wrapper
[rank7]: result = func(*args, **kwargs)
[rank7]: ^^^^^^^^^^^^^^^^^^^^^
[rank7]: File "/mnt/large_shared/federico/env_nightly/lib/python3.11/site-packages/torch/distributed/checkpoint/state_dict_saver.py", line 331, in finish_checkpoint
[rank7]: storage_writer.finish(metadata=global_metadata, results=all_results)
[rank7]: File "/mnt/large_shared/federico/env_nightly/lib/python3.11/site-packages/torch/distributed/checkpoint/filesystem.py", line 588, in finish
[rank7]: os.fsync(metadata_file.fileno())
[rank7]: ^^^^^^^^^^^^^^^^^^^^^^
[rank7]: io.UnsupportedOperation: fileno
```
To fix this issue, we can just add `UnsupportedOperation` to the try/except above.
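A minimal sketch of the suggested fix (the surrounding `finish()` code is paraphrased; the key point is adding `io.UnsupportedOperation` to the exceptions already tolerated):
```python
import io
import os

def _best_effort_fsync(stream) -> None:
    # fsspec file objects may not implement fileno(), so treat that the same
    # way as a filesystem that simply cannot fsync.
    try:
        os.fsync(stream.fileno())
    except (AttributeError, io.UnsupportedOperation):
        pass
```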
### Versions
----
cc @LucasLLC @pradeepfn @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,786,226,212
|
[dynamo] Add `--profile-details` and `--export-perfdoctor` option
|
xuzhao9
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 43
|
CONTRIBUTOR
|
Summary:
Add `--profile-details` option to add shapes and other details to the Kineto profile.
Add `--export-perfdoctor` to directly dump trace to perfdoctor for webview.
Test Plan:
```
$ buck2 run mode/opt //caffe2/benchmarks/dynamo:torchbench_internal -- --only mrs_video_watch_over --performance --training --amp --export-profiler-trace --backend=inductor --profile-details --export-perfdoctor
```
https://interncache-all.fbcdn.net/manifold/perfetto-artifacts/tree/ui/index.html#!/?url=https://interncache-all.fbcdn.net/manifold/pyper_traces/tree/traces/test/inductor_mrs_video_watch_over_rank_0_20250113_173817_6535183793.json.gz
Differential Revision: D68134547
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,786,216,298
|
Failed to export the model to ONNX
|
asdfmnbvuj
|
closed
|
[
"module: onnx",
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
onnx_program = torch.onnx.dynamo_export(self.enconder_dust, (x,pos))
### Versions
Traceback (most recent call last):
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 1222, in dynamo_export
).export()
^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 976, in export
graph_module = self.options.fx_tracer.generate_fx(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 198, in generate_fx
graph_module, graph_guard = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1305, in inner
combined_args = _combine_args(_f, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/dynamic_shapes.py", line 569, in _combine_args
return signature.bind(*args, **kwargs).arguments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/inspect.py", line 3195, in bind
return self._bind(args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/inspect.py", line 3110, in _bind
raise TypeError(msg) from None
TypeError: missing a required argument: 'pos'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/xujinqing/t_rt/InstantSplat_old/./coarse_init_infer.py", line 77, in <module>
output = inference(pairs, model, args.device, batch_size=batch_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/inference.py", line 69, in inference
res = loss_of_one_batch(collate_with_cat(pairs[i:i+batch_size]), model, None, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/inference.py", line 47, in loss_of_one_batch
pred1, pred2 = model(view1, view2)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 269, in forward
(shape1, shape2), (feat1, feat2), (pos1, pos2) = self._encode_symmetrized(view1, view2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 195, in _encode_symmetrized
feat1, feat2, pos1, pos2 = self._encode_image_pairs(img1, img2, shape1, shape2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 176, in _encode_image_pairs
out, pos, _ = self._encode_image(img1, true_shape1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 159, in _encode_image
onnx_program = torch.onnx.dynamo_export(self.enconder_dust, (x,pos))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/__init__.py", line 517, in dynamo_export
return dynamo_export(
^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 1233, in dynamo_export
raise errors.OnnxExporterError(message) from e
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer (https://microsoft.github.io/sarif-web-component/). Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
| true
|
2,786,215,632
|
fix torch.atan for torch.complex datatypes on CPU
|
jiayisunx
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: complex"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144749
Fix https://github.com/pytorch/pytorch/issues/141487.
This issue is caused by the lack of special handling of the cases where the real or imaginary part is 0/Inf/NaN in the vectorized implementation of `atan`. For correctness, I temporarily fall back the implementation of `atan` to the scalar implementation.
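For reference, a quick probe of the class of inputs involved (it just prints the results; expected values follow the scalar path):
```python
import torch

# Complex values whose real or imaginary part is 0/Inf/NaN are the ones that
# exercised the mismatch between the vectorized and scalar CPU paths.
x = torch.tensor(
    [complex(0.0, 0.5), complex(float("inf"), 1.0), complex(1.0, float("nan"))],
    dtype=torch.complex64,
)
print(torch.atan(x))
```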
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,786,178,498
|
[inductor] [cuda] [fake tensor] `torch.nextafter` loosens the check for different-device tensors on inductor
|
shaoyuyoung
|
open
|
[
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: inductor",
"module: pt2-dispatcher"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Actually, I am not sure whether this is an eager issue or an inductor one.
From my personal understanding, I think eager should pass the check the way `torch.add` does (the same `x = torch.nextafter(x, torch.tensor(1.0))` call can pass the check when compiled, as only the eager failure shows up in the log below).
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.enable_grad(False)
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
x = torch.nextafter(x, torch.tensor(1.0))
return x
model = Model().cuda()
x = torch.randn(1).cuda()
inputs = [x]
try:
output = model(*inputs)
except Exception as e:
print("fails on eager")
print(e)
try:
model = torch.compile(model)
output = model(*inputs)
except Exception as e:
print("fails on inductor")
print(e)
```
log
```
fails on eager
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument other in method wrapper_CUDA_nextafter)
```
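For comparison, a hedged illustration of the asymmetry described above (requires a CUDA device; the `nextafter` line reproduces the eager error from the log):
```python
import torch

x = torch.randn(1, device="cuda")
print(torch.add(x, torch.tensor(1.0)))        # OK: the 0-dim CPU tensor is treated as a scalar
print(torch.nextafter(x, torch.tensor(1.0)))  # raises the device-mismatch error in eager
```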
### Versions
PyTorch version: 2.7.0.dev20250112+cu124
GPU: Tesla V100-SXM2-32GB
<details>
<summary>click here for detailed env</summary>
```
PyTorch version: 2.7.0.dev20250112+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 550.142
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250112+cu124
[pip3] torchaudio==2.6.0.dev20250112+cu124
[pip3] torchvision==0.22.0.dev20250112+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250112+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250112+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250112+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @bdhirsh
| true
|
2,786,174,062
|
add fp8 support to index_cuda
|
danielvegamyhre
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 6
|
CONTRIBUTOR
|
Fixes #133605
**Summary**
This PR adds support for FP8 data types to the `index_cuda` op.
It uses `AT_DISPATCH_V2`, which is a new macro that can handle an arbitrary number of dtypes, as opposed to the old implementation, which had a separate macro for each possible number of dtype arguments (e.g. `AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND{2,3,4,5...}`).
**Test plan**
Updated test `index_cuda_with_cpu` in `test/test_fake_tensor.py` to have cases for all dtypes handled by `index_cuda`, including fp8 dtypes.
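As a rough illustration of what the op now covers (not the actual test; requires a CUDA build with fp8 dtypes):
```python
import torch

x = torch.randn(8, device="cuda").to(torch.float8_e4m3fn)
idx = torch.tensor([0, 2, 5], device="cuda")
print(x[idx])  # advanced indexing dispatches to index_cuda, which now handles fp8 dtypes
```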
| true
|
2,786,167,538
|
expose extra torch_python apis
|
garfield1997
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Fixes #144302
After checking the code of my third-party devices, I found that these APIs are also relied on by us, so I exposed them following the discussion in the issue.
| true
|
2,786,151,984
|
[BE] Make a SymbolInfo NamedTuple
|
ezyang
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144745
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,786,151,846
|
Allow GradientEdge as torch.autograd.backward outputs
|
soulitzer
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: autograd",
"topic: improvements"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144744
| true
|
2,786,147,732
|
[BE] Remove lambda from str
|
ezyang
|
closed
|
[
"Merged",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144745
* __->__ #144743
* #144471
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,786,134,646
|
outerNode->outputs().size()
|
asdfmnbvuj
|
closed
|
[
"oncall: jit",
"module: onnx"
] | 1
|
NONE
|
### 🐛 Describe the bug
RuntimeError: outerNode->outputs().size() == node->inputs().size() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1729647348947/work/torch/csrc/jit/passes/dead_code_elimination.cpp":138, please report a bug to PyTorch.
Traceback (most recent call last):
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 90, in __call__
exported_program = self._capture(model, args, kwargs, dynamic_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_capture_strategies.py", line 124, in _capture
return torch.export.export(
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/__init__.py", line 270, in export
return _export(
^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 1017, in wrapper
raise e
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 990, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/exported_program.py", line 114, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 1880, in _export
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 1224, in _strict_export
return _strict_export_lower_to_aten_ir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 1252, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/export/_trace.py", line 560, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1432, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 385, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 385, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 385, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 1024, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 774, in call_method
return self.call_apply(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 723, in call_apply
return variables.UserFunctionVariable(fn, source=source).call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 727, in call_function
unimplemented(msg)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/_dynamo/exc.py", line 297, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin models.curope.curope.PyCapsule.rope_2d. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
from user code:
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/croco.py", line 260, in forward
x = blk(x,pos)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/blocks.py", line 129, in forward
x = x + self.drop_path(self.attn(y, xpos))
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/blocks.py", line 102, in forward
q = self.rope(q, xpos)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/curope/curope2d.py", line 39, in forward
cuRoPE2D_func.apply( tokens.transpose(1,2), positions, self.base, self.F0 )
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/curope/curope2d.py", line 20, in forward
_kernels.rope_2d( tokens, positions, base, F0 )
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/xujinqing/t_rt/InstantSplat_old/./coarse_init_infer.py", line 77, in <module>
output = inference(pairs, model, args.device, batch_size=batch_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/inference.py", line 69, in inference
res = loss_of_one_batch(collate_with_cat(pairs[i:i+batch_size]), model, None, device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/inference.py", line 47, in loss_of_one_batch
pred1, pred2 = model(view1, view2)
^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 271, in forward
(shape1, shape2), (feat1, feat2), (pos1, pos2) = self._encode_symmetrized(view1, view2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 197, in _encode_symmetrized
feat1, feat2, pos1, pos2 = self._encode_image_pairs(img1, img2, shape1, shape2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 178, in _encode_image_pairs
out, pos, _ = self._encode_image(img1, true_shape1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/dust3r/model.py", line 164, in _encode_image
torch.onnx.export(self.enconder_dust, (torch.rand(1,640,1024).cuda(),pos), './uu.onnx', verbose=False, opset_version=18, enable_onnx_checker=False, do_constant_folding=True,dynamo=True)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/__init__.py", line 345, in export
return exporter.export_compat(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_compat.py", line 161, in export_compat
onnx_program = _core.export(
^^^^^^^^^^^^^
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/onnx/_internal/exporter/_core.py", line 1057, in export
raise _errors.TorchExportError(
torch.onnx._internal.exporter._errors.TorchExportError: Failed to export the model with torch.export. This is step 1/2 of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and summit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
## Exception summary
<class 'torch._dynamo.exc.Unsupported'>: Graph break due to unsupported builtin models.curope.curope.PyCapsule.rope_2d. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
from user code:
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/croco.py", line 260, in forward
x = blk(x,pos)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/blocks.py", line 129, in forward
x = x + self.drop_path(self.attn(y, xpos))
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/blocks.py", line 102, in forward
q = self.rope(q, xpos)
File "/opt/miniconda3/envs/tensor_rt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/curope/curope2d.py", line 39, in forward
cuRoPE2D_func.apply( tokens.transpose(1,2), positions, self.base, self.F0 )
File "/home/xujinqing/t_rt/InstantSplat_old/submodules/dust3r/croco/models/curope/curope2d.py", line 20, in forward
_kernels.rope_2d( tokens, positions, base, F0 )
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
### Versions
torch2trt 0.5.0 pypi_0 pypi
torchtriton 3.1.0 py311 pytorch
torchvision 0.20.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,786,114,801
|
[cherry-pick][dtensor] expose the __create_chunk_list__ in the doc (#144100)
|
wanchaol
|
closed
|
[
"oncall: distributed",
"open source",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
As titled, this PR exposes this dunder method as a public API in the doc, so that different checkpoint implementations can leverage this protocol instead of exposing a separate API.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144100
Approved by: https://github.com/awgu
ghstack dependencies: #144099
(cherry picked from commit eb7a303d21c247b50e095b0b4768d5b5c4ac2285)
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,786,110,463
|
[cherry-pick] [dtensor] improve doc of the DTensor class (#144099)
|
wanchaol
|
closed
|
[
"oncall: distributed",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
As titled: explicitly list all public members to make sure the public API stays consistent; also use groupwise member order to make the doc look better.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144099
Approved by: https://github.com/awgu
(cherry picked from commit 48a05ee7735709406b782474e66f0c6231e2ad2e)
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,786,076,408
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 6
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [22cc419e4e60f469341712a5a103fa309a7dfd48](https://github.com/intel/torch-xpu-ops/commit/22cc419e4e60f469341712a5a103fa309a7dfd48), which includes:
- Fix building issue https://github.com/intel/torch-xpu-ops/issues/1279
- Aten operator coverage improvement
Note: the new torch-xpu-ops commit doesn't support bundle 0.5.3.
| true
|
2,786,073,401
|
Back out "[Submodule] Upgrade to Cutlass 3.6"
|
drisspg
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: sparse"
] | 6
|
CONTRIBUTOR
|
Summary: Revert due to perf regressions; see https://github.com/pytorch/pytorch/issues/144729
Test Plan: Sandcastle
Differential Revision: D68137326
| true
|
2,786,066,893
|
Revert D67866269
|
drisspg
|
closed
|
[
"fb-exported",
"release notes: sparse"
] | 3
|
CONTRIBUTOR
|
Summary:
This diff reverts D67866269
https://www.internalfb.com/tasks/?t=212439515
Perf regression
Test Plan: NA
Differential Revision: D68137255
| true
|
2,785,993,987
|
[inductor] fix index.Tensor fallback
|
shunting314
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144736
The original issue is that we see an accuracy problem in a Meta-internal model [meta internal link](https://fb.workplace.com/groups/1075192433118967/posts/1567334737238065/). The debugging was hard but the root cause is relatively simple: the model has mixed-device inputs for index.Tensor, which causes Inductor to fall back, and the meta kernel for index.Tensor returns a tensor whose strides are inconsistent with the eager kernel's.
The following code snippet
```
import torch
from torch._subclasses import FakeTensorMode
device = "cuda"
x = torch.randn((24, 16, 32, 32), device=device).to(memory_format=torch.channels_last)
x = x.view(2, 12, 16, 32, 32)
i1 = torch.arange(2).unsqueeze(-1)
i2 = torch.argsort(torch.rand(2, 12), dim=-1)[:, :3]
print(f"Eager stride: {x[i1, i2].stride()}")
mode = FakeTensorMode()
with mode:
    f_x = mode.from_tensor(x)
    f_i1 = mode.from_tensor(i1)
    f_i2 = mode.from_tensor(i2)
    f_out = f_x[f_i1, f_i2]
    print(f"Meta stride: {f_out.stride()}")
```
would output:
```
Eager stride: (49152, 16384, 1, 512, 16)
Meta stride: (49152, 16384, 1024, 32, 1)
```
In this PR, I fix the problem by running the eager kernel to get the index.Tensor fallback's output layout. A better solution would be to change the meta/eager kernel implementations so that their output layouts match, but I'm not sure how to properly do that.
In the index.Tensor meta kernel, we always produce dense output: https://github.com/pytorch/pytorch/blob/6d56277682715e56cfdfcaff6f770acebda966d7/torch/_meta_registrations.py#L3184 . The eager kernel, on the other hand, leverages TensorIteratorBase to decide a dimension permutation: https://github.com/pytorch/pytorch/blob/6d56277682715e56cfdfcaff6f770acebda966d7/aten/src/ATen/TensorIterator.cpp#L232-L308 . We can duplicate this logic in the meta kernel implementation if we really want meta to match eager. I can follow up on this if people have a strong opinion that we should do it.
And here is an issue https://github.com/pytorch/pytorch/issues/144717 for asserting size/strides for fallback kernels. With that, the issue debugged here would be much easier to root cause.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,785,988,105
|
Use random64 in Fischer-Yates algorithm for large N (#143682)
|
kit1980
|
closed
|
[] | 1
|
CONTRIBUTOR
|
Fixes a bug in randperm: https://nbsanity.com/static/a4774194938414dedcec7d6e99727d31/Shuffling_20in_20torch_20vs_20numpy-public.html
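For context, the linked analysis comes down to how the swap index is drawn at each step of the shuffle; below is a minimal Python sketch of Fisher-Yates (not the ATen change itself), assuming a random source wide enough to cover large N without modulo bias:
```python
import random

def fisher_yates(n, rng=random.SystemRandom()):
    # randrange draws from an arbitrarily wide source, so index j stays unbiased
    # even when n approaches 2**32; a 32-bit draw reduced mod (i + 1) would not.
    a = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randrange(i + 1)
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(10))
```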
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143682
Approved by: https://github.com/eqy, https://github.com/albanD, https://github.com/malfet
Fixes #ISSUE_NUMBER
| true
|
2,785,962,154
|
[Pipelining] fix test_schedule.py (missing destroy_process_group
|
wconstab
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144734
* #144596
* #144352
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,785,960,982
|
optimize the decomposition of aten.native_group_norm
|
jiayisunx
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 10
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144733
Summary:
Optimize the decomposition of aten.native_group_norm. Reduce unnecessary repeated operations by changing the order of operations for `mean`, `rstd`, `weight`, `bias` and `input`, which can improve performance when `flattened_inner_size` is large.
The original decomposition:
1. compute `mean` and `rstd`,
2. out = (x - mean) * rstd, computed over the range [N, C, *],
3. out = out * weight + bias, computed over the range [N, C, *],
The new decomposition (see the sketch below):
1. compute `mean` and `rstd`,
2. new_weight = rstd * weight, new_bias = - mean * rstd * weight + bias, computed over the range [N, C],
3. out = x * new_weight + new_bias, computed over the range [N, C, *],
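A hedged eager-mode sketch of the two decompositions (plain PyTorch, not the actual Inductor decomposition; shapes, eps, and the variance convention are assumptions):
```python
import torch

def gn_old(x, weight, bias, groups, eps=1e-5):
    N, C = x.shape[:2]
    xg = x.reshape(N, groups, -1)
    mean = xg.mean(dim=-1, keepdim=True)
    rstd = (xg.var(dim=-1, unbiased=False, keepdim=True) + eps).rsqrt()
    out = ((xg - mean) * rstd).reshape(N, C, -1)            # pass over [N, C, *]
    out = out * weight.view(1, C, 1) + bias.view(1, C, 1)   # second pass over [N, C, *]
    return out.reshape(x.shape)

def gn_new(x, weight, bias, groups, eps=1e-5):
    N, C = x.shape[:2]
    xg = x.reshape(N, groups, -1)
    mean = xg.mean(dim=-1, keepdim=True)                    # [N, groups, 1]
    rstd = (xg.var(dim=-1, unbiased=False, keepdim=True) + eps).rsqrt()
    # fold mean/rstd into the affine parameters over the small [N, C] range
    mean_c = mean.repeat_interleave(C // groups, dim=1).reshape(N, C)
    rstd_c = rstd.repeat_interleave(C // groups, dim=1).reshape(N, C)
    new_weight = rstd_c * weight
    new_bias = -mean_c * rstd_c * weight + bias
    # single fused multiply-add over the large [N, C, *] range
    out = x.reshape(N, C, -1) * new_weight.unsqueeze(-1) + new_bias.unsqueeze(-1)
    return out.reshape(x.shape)

x = torch.randn(2, 6, 4, 4)
w, b = torch.randn(6), torch.randn(6)
ref = torch.nn.functional.group_norm(x, 3, w, b)
assert torch.allclose(gn_old(x, w, b, 3), ref, atol=1e-5)
assert torch.allclose(gn_new(x, w, b, 3), ref, atol=1e-5)
```
The point is that the folding work in `gn_new` happens over [N, C] only, so only one elementwise pass touches the full [N, C, *] range.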
I tested the Inductor performance benchmark with this PR on both CPU and A100. On CPU, two torchbench models (functorch_dp_cifar10 and opacus_cifar10) show about a 25% performance improvement, and two diffusion models (Stable Diffusion and Latent Consistency Model (LCM)) show about a 2% performance improvement. On A100, no performance gains or regressions were seen.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,785,951,052
|
[MPS] Fix bitwise shifts for uint8
|
pytorchbot
|
closed
|
[
"open source",
"release notes: mps",
"ciflow/mps"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144251
* #144250
* #144249
Previously all bitwise operations were aliased to the same type, but this is wrong for shift ops.
Rather than building overly complex logic, let's just instantiate using the shared `scalarToMetalTypeString` helper function.
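A hedged repro sketch of the behaviour this targets (values are assumed, not taken from the linked issue):
```python
import torch

x = torch.arange(8, dtype=torch.uint8)
print(x << 1)  # CPU reference
if torch.backends.mps.is_available():
    # after the fix, the MPS result should match the CPU reference
    print((x.to("mps") << 1).cpu())
```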
Fixes https://github.com/pytorch/pytorch/issues/144190
| true
|
2,785,936,123
|
[mps/inductor] Add support for `round()`
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
With this change, inductor/test_view_on_aliased passes.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,785,925,137
|
Revert "Use random64 in Fischer-Yates algorithm for large N (#143682)…
|
kit1980
|
closed
|
[
"release notes: dataloader"
] | 1
|
CONTRIBUTOR
|
… (#143875)"
This reverts commit b1a10ecad96f04db9baff453ae42ef4dd45b62f4.
Fixes #ISSUE_NUMBER
| true
|
2,785,922,646
|
[Perf] Flash-Attn Bwd slow down w/ cutlass 3.6.0 in General
|
drisspg
|
closed
|
[
"high priority",
"triage review",
"module: cuda"
] | 12
|
CONTRIBUTOR
|
# Summary
### Update
In fact, it appears that 3.6.0 is in general slower than 3.5.1 for FAv2 in its current state:
| Batch Size | Sequence Length | Forward Pass Slowdown | Backward Pass Slowdown |
|------------|----------------|----------------------|----------------------|
| 1 | 128 | 1.51x slower | 1.60x slower |
| 1 | 1024 | 1.17x slower | 1.48x slower |
| 1 | 8192 | 1.00x (same) | 1.17x slower |
| 8 | 128 | 1.56x slower | 1.83x slower |
| 8 | 1024 | 1.25x slower | 1.45x slower |
| 8 | 8192 | 1.01x (same) | 1.17x slower |
| 16 | 128 | 1.46x slower | 1.88x slower |
| 16 | 1024 | 1.28x slower | 1.42x slower |
| 16 | 8192 | 1.02x (same) | 1.14x slower |
Using benchmark here: https://github.com/pytorch/pytorch/blob/main/benchmarks/transformer/sdpa.py
On H100
## Local benchmark
With cutlass version 3.6.0:
https://interncache-all.fbcdn.net/manifold/perfetto-artifacts/tree/ui/index.html#!/?url=https://interncache-all.fbcdn.net/manifold/perfetto_internal_traces/tree/shared_trace/drisspg_23fe031c-9ef6-4527-b212-703c22b0bad6_new.json
Total backwards time: 1.4 ms
With cutlass version 3.5.1:
https://interncache-all.fbcdn.net/manifold/perfetto-artifacts/tree/ui/index.html#!/?url=https://interncache-all.fbcdn.net/manifold/perfetto_internal_traces/tree/shared_trace/drisspg_8d37a9bd-4916-4c4e-8171-474bf2b892a0_old.json
Total backwards time: 0.798 ms
Repro
```Python
import torch
import torch.nn.functional as F
import math
from pathlib import Path

from torch.nn.attention import sdpa_kernel, SDPBackend
from contextlib import contextmanager


@contextmanager
def profiler(
    path: Path,
    record_shapes: bool = True,
    profile_memory: bool = False,
    with_stack: bool = False,
):
    """Thin wrapper around torch.profiler
    Args:
        path: The path to save the trace file to
        record_shapes: Record shapes of tensors
        profile_memory: Profile memory usage
        with_stack: Record stack traces - Blows up memory
    Usage:
    ```
    with profiler(Path("trace.json")):
        # code to profile
    ```
    """
    path = path.with_suffix(".json")
    # make parent dir if it doesn't exist
    output_dir = path.parent
    output_dir.mkdir(parents=True, exist_ok=True)

    def trace_handler(prof) -> None:
        prof.export_chrome_trace(path.as_posix())

    profiler = torch.profiler.profile(
        activities=[
            torch.profiler.ProfilerActivity.CPU,
            torch.profiler.ProfilerActivity.CUDA,
        ],
        on_trace_ready=trace_handler,
        record_shapes=record_shapes,
        profile_memory=profile_memory,
        with_stack=with_stack,
    )
    try:
        profiler.start()
        yield profiler
    finally:
        profiler.stop()


@sdpa_kernel(SDPBackend.FLASH_ATTENTION)
def test_scaled_dot_product_attention():
    batch_size = 2048
    num_heads = 4
    seq_len = 128
    head_dim = 32

    print(f"Dimensions:")
    print(f"- Batch size: {batch_size}")
    print(f"- Num heads: {num_heads}")
    print(f"- Sequence length: {seq_len}")
    print(f"- Head dim: {head_dim}")
    print(f"\nExpected CUDA grid: ({seq_len // 128}, {batch_size}, {num_heads})")

    device = "cuda"
    dtype = torch.float16
    print(f"\nCreating tensors on {device} with dtype {dtype}")

    q = torch.randn(
        batch_size, num_heads, seq_len, head_dim, device=device, dtype=dtype, requires_grad=True
    )
    k = torch.randn(
        batch_size, num_heads, seq_len, head_dim, device=device, dtype=dtype, requires_grad=True
    )
    v = torch.randn(
        batch_size, num_heads, seq_len, head_dim, device=device, dtype=dtype, requires_grad=True
    )

    output = F.scaled_dot_product_attention(
        q,
        k,
        v,
    )
    out = output.sum()
    out.backward()


def main():
    with profiler(Path("bwd_perf.json")):
        test_scaled_dot_product_attention()


if __name__ == "__main__":
    main()
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @ptrblck @eqy
| true
|
2,785,897,200
|
[caffe2][remove dead code] Removed unused zippydb code
|
AishwaryaSivaraman
|
closed
|
[
"fb-exported",
"Stale",
"module: dynamo"
] | 6
|
CONTRIBUTOR
|
Summary:
Edward mentioned the ZippyDB cache is no longer used, so this removes the ZippyDB bits.
This reduces transitive deps by 563.
Test Plan: CI?
Reviewed By: oulgen
Differential Revision: D68083897
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,785,885,928
|
Register nonzero for meta device for FBLSim
|
lurunming
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 10
|
CONTRIBUTOR
|
Summary:
Fix `nonzero is not registered to meta` issue:
```
"NotImplementedError: aten::nonzero: attempted to run this operator with Meta tensors, but there was no fake impl or Meta kernel registered".
```
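For reference, the general recipe for this class of error on a custom op is to register a fake impl; below is a hedged sketch using a hypothetical `mylib::custom_nonzero` (the actual fix here registers a meta kernel for the builtin aten::nonzero instead):
```python
import torch

# hypothetical custom op with a data-dependent output shape, mirroring nonzero
@torch.library.custom_op("mylib::custom_nonzero", mutates_args=())
def custom_nonzero(x: torch.Tensor) -> torch.Tensor:
    return x.nonzero()

@custom_nonzero.register_fake
def _(x):
    # the number of nonzeros is data-dependent, so allocate a symbolic size
    ctx = torch.library.get_ctx()
    nnz = ctx.new_dynamic_size()
    return x.new_empty(nnz, x.dim(), dtype=torch.long)
```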
Reviewed By: ezyang
Differential Revision: D66525640
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|