| id (int64, 2.74B–3.05B) | title (string, len 1–255) | user (string, len 2–26) | state (string, 2 classes) | labels (list, len 0–24) | comments (int64, 0–206) | author_association (string, 4 classes) | body (string, len 7–62.5k, nullable ⌀) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,929,769,904
|
[ROCm] Fixes and improvements to CUDA->HIP flag conversion for CPP extensions
|
pytorchbot
|
closed
|
[
"module: rocm",
"open source",
"ciflow/rocm"
] | 1
|
COLLABORATOR
|
Fixes https://github.com/ROCm/hip/issues/3764.
Fixes and improvements to CUDA->HIP flag conversion for CPP extensions:
- Log flag conversion for debugging purposes.
- Fix cases where the -I flags should not be touched, and cases where "CUDA" appears more than once, by replacing only the first instance.
- Fix the case where the nvcc key may not exist.
- Fix the case where hipify should ignore flag values and only touch the flag itself (a minimal sketch of these rules follows below).
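A hedged, minimal sketch of these rules (hypothetical helper name, not the actual hipify/cpp_extension code):
```python
import logging

def hipify_flags(flags):
    """Illustrative CUDA->HIP flag conversion following the rules above (not PyTorch's hipify)."""
    converted = []
    for flag in flags:
        if flag.startswith("-I"):
            converted.append(flag)  # include paths must not be rewritten
            continue
        # Only touch the flag name, never its value, and only the first "CUDA" occurrence.
        name, sep, value = flag.partition("=")
        new_name = name.replace("CUDA", "HIP", 1)
        new_flag = new_name + sep + value
        if new_flag != flag:
            logging.debug("hipified flag %r -> %r", flag, new_flag)
        converted.append(new_flag)
    return converted

# -DUSE_CUDA becomes -DUSE_HIP, -I/opt/cuda/include is untouched, the value of -DCUDA_PATH=... is untouched.
print(hipify_flags(["-DUSE_CUDA", "-I/opt/cuda/include", "-DCUDA_PATH=/usr/local/cuda"]))
```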
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,929,749,472
|
[export] Handle non OpNamespace type during decomposition.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 6
|
CONTRIBUTOR
|
Summary:
It turns out we can have non-OpNamespace objects in torch.ops._dir.
We should just throw those away during iteration.
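A hedged sketch of that filtering; `_OpNamespace` currently lives in `torch._ops`, but treat the exact import path as an assumption rather than a guarantee:
```python
import torch
from torch._ops import _OpNamespace  # private API; exact location is an assumption

def op_namespaces():
    """Yield only real op namespaces from torch.ops, discarding anything else."""
    for name in torch.ops._dir:
        candidate = getattr(torch.ops, name)
        if isinstance(candidate, _OpNamespace):
            yield candidate
        # Non-OpNamespace entries are simply thrown away during iteration.
```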
Test Plan: eyes
Differential Revision: D71417992
| true
|
2,929,734,448
|
DISABLED test_get_model_state_dict_del_memory (__main__.TestStateDictMemory)
|
izaitsevfb
|
closed
|
[
"oncall: distributed",
"skipped"
] | 3
|
CONTRIBUTOR
|
Platforms: linux
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22distributed%2Fcheckpoint%2Ftest_state_dict.py%3A%3ATestStateDictMemory%3A%3Atest_get_model_state_dict_del_memory%22%5D)).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @mori360
https://hud.pytorch.org/hud/pytorch/pytorch/b8c0c50bbea1970b5f6eb5b11b08bb093f3a7998/1?per_page=50&name_filter=ull%20%2F%20linux-focal-cuda11.8-py3.10-gcc9%20%2F%20test%20&mergeLF=true
| true
|
2,929,731,322
|
[JIT] fix torchscript mha bias=False
|
Isalia20
|
open
|
[
"triaged",
"open source",
"release notes: jit",
"topic: bug fixes"
] | 3
|
COLLABORATOR
|
Fixes #149391
| true
|
2,929,713,686
|
Enable TMA persistent GEMM Template by default
|
PaulZhang12
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
ghstack-source-id: d95f0938c09704b6658c6ed9f9c9d02cb474d636
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149427
Another attempt to enable the TMA persistent GEMM templates in Inductor, given the availability of Hopper GPUs in the CI.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,929,711,424
|
[DO NOT MERGE] Enable TMA persistent GEMM Template by default
|
PaulZhang12
|
open
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 15
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149427
Previously, this was unable to be landed given there was limited H100 for CI testing. Benchmarking on H100 CI looks good now.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,929,708,611
|
Use mypy 1.15
|
ZainRizvi
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,929,701,579
|
python custom ops tutorial stopped working in PyTorch 2.7 RC1
|
zou3519
|
closed
|
[
"high priority",
"triage review",
"oncall: pt2",
"module: inductor"
] | 5
|
CONTRIBUTOR
|
Get PyTorch 2.7 RC1. Repro in next comment.
Error looks like:
```py
Traceback (most recent call last):
File "/home/rzou/dev/2.7/pco.py", line 124, in <module>
cropped_img = f(img)
^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/pco.py", line 120, in f
@torch.compile(fullgraph=True)
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line
328, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in cal
l_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line
689, in inner_fn
outs = compiled_fn(args)
^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line
495, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_rzou/oy/coy5shd4xlyzvhkrwtaiad5zxz7jhd654636vqhwxsyeux5q27d7.py", line 42, in call
assert_size_stride(buf1, (3, 40, 40), (1600, 40, 1))
AssertionError: expected size 3==3, stride 1==1600 at dim=0; expected size 40==40, stride 120==40 at dim=1; expected s
ize 40==40, stride 3==1 at dim=2
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
```
cc @ezyang @gchanan @kadeng @msaroufim @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,929,643,466
|
[export] refactor DimHints for type errors
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export",
"suppress-bc-linter"
] | 7
|
CONTRIBUTOR
|
Differential Revision: D71414367
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,929,604,084
|
Add regression tests for 3 missing PR-time benchmarks
|
benjaminglass1
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Uses values from the latest PR-time benchmark run on viable/strict. See https://github.com/pytorch/pytorch/actions/runs/13898520615/job/38900894469 for a job showing why this is needed.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,929,589,063
|
Pip-installed pytorch limits threads to 1 when setting GOMP_CPU_AFFINITY (likely due to bundled GOMP)
|
yuchengliu1
|
open
|
[
"high priority",
"module: binaries",
"triaged",
"module: intel"
] | 13
|
NONE
|
### 🐛 Describe the bug
Pip-installed PyTorch limits threads to 1 when setting GOMP_CPU_AFFINITY, while a PyTorch build from source does not have this problem. The pip-installed PyTorch uses a bundled GOMP.
Here is a C++ case that reproduces it.
```
#include <stdio.h>
#include <omp.h>
#include <torch/torch.h>
int main() {
printf("omp_get_max_threads %d\n", omp_get_max_threads());
printf("at::get_num_threads %d\n", at::get_num_threads());
return 0;
}
```
Compile command:
```
g++ -I<PYTHON_INSTALL_DIR>/site-packages/torch/include/torch/csrc/api/include/ -I<PYTHON_INSTALL_DIR>/site-packages/torch/include/ -fopenmp test.cpp -o test.o -L<PYTHON_INSTALL_DIR>/site-packages/torch/lib -ltorch -ltorch_cpu -lc10 -D_GLIBCXX_USE_CXX11_ABI=0
```
The result with pip-installed PyTorch: (screenshot not preserved)
The result with PyTorch built from source: (screenshot not preserved)
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease6469.7-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar @atalman @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,929,564,975
|
[caffe2] Do not use --no-as-needed on macOS
|
stepanhruda
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Summary:
`--no-as-needed` is not available in ld64.lld
Applying this to all of macOS is potentially too broad? I am not sure whether `fbcode//mode/mac` uses a different linker, but arvr mode definitely uses ld64.lld.
Test Plan: CI / used for a macOS build on top of the stack.
Differential Revision: D71315125
| true
|
2,929,553,574
|
[dynamic] use maybe_mark_dynamic instead of mark_dynamic for batch size in benchmarks
|
xmfan
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148516
* __->__ #149420
* #149367
* #148694
* #149229
* #149336
In CA (compiled autograd), when we capture the backward, the tensor containing the batch dim is sometimes coerced by a mul/matmul etc. with a static-shaped tensor, resulting in a guard validation error.
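A minimal sketch of the difference being relied on here, assuming `maybe_mark_dynamic` is exposed from `torch._dynamo` alongside `mark_dynamic`: the `maybe_` variant only hints that the dim may be dynamic, so a later specialization does not raise a constraint/guard violation.
```python
import torch
import torch._dynamo as dynamo

x = torch.randn(32, 128)
dynamo.mark_dynamic(x, 0)        # hard constraint: errors if dim 0 later gets specialized

y = torch.randn(32, 128)
dynamo.maybe_mark_dynamic(y, 0)  # soft hint (assumed API): specialization is tolerated
```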
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,929,549,264
|
[logging] Add python version to dynamo_compile table
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149419
Summary: This adds a version field like the following: `3.10.9+fb (3.10:1dd9be6, May 4 2022, 01:23:45) [Clang 15.0.7 (mononoke://mononoke.internal.tfbnw.net/fbsource 5d1601b0eed7426ac`
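For reference, the logged string appears to be CPython's `sys.version` (an assumption based on the sample above):
```python
import sys

# On a stock build this prints something like "3.10.9 (main, ...) [GCC 11.3.0]".
print(sys.version)
```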
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,929,449,775
|
Aborting distributed backend causes segmentation fault in autograd
|
szmigacz
|
closed
|
[
"oncall: distributed",
"triaged",
"module: c10d",
"bug"
] | 10
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Running `torch.distributed.distributed_c10d._abort_process_group` asynchronously from a separate Python thread causes a segmentation fault in PyTorch autograd. The issue reproduces in the `pytorch/pytorch:2.6.0-cuda12.6-cudnn9-devel` container on 8x H100, but I don't think it's hardware-specific since the issue is reported from CPU code. Likely only the delay before `_abort_process_group` is hardware-specific; the issue reproduces if `_abort_process_group` is called while the main thread is calling `loss.backward`.
On 8x H100 the issue reproduces in ~100 `loop_iterations`.
Code to reproduce:
```
import argparse
import threading
import datetime
import os
import random
import torch
def parse_args():
parser = argparse.ArgumentParser(
description='Example',
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
parser.add_argument('--size', default=64, type=int)
parser.add_argument('--layers', default=4, type=int)
parser.add_argument('--log-interval', default=100, type=int)
parser.add_argument('--chkpt-interval', default=100, type=int)
parser.add_argument('--total-iterations', default=1000000, type=int)
parser.add_argument('--seed', default=1234, type=int)
parser.add_argument('--device', default='cuda', choices=['cpu', 'cuda'])
return parser.parse_args()
def abort():
torch.distributed.distributed_c10d._abort_process_group(
torch.distributed.distributed_c10d.GroupMember.WORLD
)
def train(
loop_iteration, base_store, model, opt, backend, device, timeout, args
):
aborted = False
log_interval = args.log_interval
chkpt_interval = args.chkpt_interval
rank = int(os.environ['RANK'])
world_size = int(os.environ['WORLD_SIZE'])
# Create a new Store by adding a prefix based on the current restart
# iteration. PrefixStore wraps the baseline TCPStore which is reused for
# all restart iterations
store = torch.distributed.PrefixStore(str(loop_iteration), base_store)
torch.distributed.distributed_c10d._store_based_barrier(
rank,
store,
'initial',
world_size,
timeout=datetime.timedelta(seconds=60),
)
torch.distributed.init_process_group(
backend,
store=store,
rank=rank,
world_size=world_size,
timeout=timeout,
)
local_rank = int(os.environ['LOCAL_RANK'])
model_ddp = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[local_rank], output_device=local_rank
)
random.seed((args.seed + loop_iteration) * world_size)
fault_iteration = random.randint(1, 10)
random.seed((args.seed + loop_iteration) * world_size + rank)
delay = random.random() / 100
print(f'{rank=} {fault_iteration=} {delay=}')
for iteration in range(args.total_iterations):
# Randomly trigger an example fault
if iteration == fault_iteration and not aborted:
aborted = True
print(f'example fault at {iteration=} from {rank=}')
# abort torch.distributed after a random delay
timer = threading.Timer(
delay,
abort,
)
timer.start()
inp = torch.rand(args.size, args.size).to(device)
model.zero_grad()
out = model_ddp(inp)
loss = out.square().mean()
loss.backward()
opt.step()
loss.item()
if rank == 0 and iteration % log_interval == log_interval - 1:
print(f'{rank=} {iteration=} {loss.item()=}')
def main():
args = parse_args()
print(f'{args}')
local_rank = int(os.environ['LOCAL_RANK'])
if args.device == 'cuda':
torch.cuda.set_device(local_rank)
device = torch.device('cuda')
backend = 'nccl'
timeout = datetime.timedelta(seconds=150)
elif args.device == 'cpu':
device = torch.device('cpu')
backend = 'gloo'
timeout = datetime.timedelta(seconds=10)
else:
raise RuntimeError
# All objects created in ``main()`` are constructed only once, and reused
# for all restart iterations.
if args.seed is not None:
torch.manual_seed(args.seed)
model = torch.nn.Sequential(
*[torch.nn.Linear(args.size, args.size) for _ in range(args.layers)]
).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-5)
# TCPStore uses ``(MASTER_PORT + 1)`` to avoid conflicts with TCPStore
# created by ``torch.distributed.run`` and listening on ``MASTER_PORT``,
store = torch.distributed.TCPStore(
host_name=os.environ['MASTER_ADDR'],
port=int(os.environ['MASTER_PORT']) + 1,
world_size=int(os.environ['WORLD_SIZE']),
is_master=(int(os.environ['RANK']) == 0),
multi_tenant=True,
wait_for_workers=True,
use_libuv=True,
)
rank = int(os.environ['RANK'])
loop_iteration = 0
while True:
print(f'Starting {loop_iteration=}')
try:
train(loop_iteration, store, model, opt, backend, device, timeout, args)
except Exception as ex:
print(f'Exception on {rank=} {str(ex)}')
if torch.distributed.is_initialized():
torch.distributed.destroy_process_group()
loop_iteration += 1
if __name__ == '__main__':
main()
```
Attaching sample output, and backtrace.
[backtrace.txt](https://github.com/user-attachments/files/19324896/backtrace.txt)
[output.txt](https://github.com/user-attachments/files/19324872/output.txt)
The backtrace is non-deterministic; I've seen different failures, but so far it has always contained `c10d::Reducer`.
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1030-nvidia-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2801.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] optree==0.14.0
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.2 py311h5d046bc_0 conda-forge
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,929,419,769
|
[Build] Guard per-op headers in ACLUtils.cpp
|
malfet
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 5
|
CONTRIBUTOR
|
To fix internal build failures where per-op headers are not generated.
We really should have a lint for something like that.
Test Plan: CI
Reviewed By: izaitsevfb
Differential Revision: D71406882
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,929,377,209
|
[Build] Guard per-op headers inclusion
|
malfet
|
closed
|
[
"release notes: build",
"topic: bug fixes"
] | 2
|
CONTRIBUTOR
|
In a newly added header, to fix internal build failures where per-op headers are not used.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,929,344,450
|
Normalize intermediate node names to better utilize cache
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149898
* __->__ #149415
This change was motivated by an internal use case (https://fb.workplace.com/groups/1553867532149891/?multi_permalinks=1708481206688522&comment_id=1711739699696006&notif_id=1742399826944239&notif_t=work_feedback_reaction_generic&ref=notif) where we were producing different intermediate node names for the exact same code. This normalization pass does an alpha renaming of intermediate variables so that more isomorphic graphs now result in the same Dynamo-outputted graph.
We do a normalization pass that effectively ensures that the name indexes monotonically increase. This typically
already happens, but in some cases, such as in HOPs, the invariant could be broken without normalization. Below we
show an example where cond previously would have jumped from getitem_3 to getitem_2, but with normalization correctly uses getitem_4 after getitem_3.
We've run this on the same model internally and confirmed that with this change we now get a cache hit.
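A standalone, hedged sketch of the idea (not the actual Dynamo pass): alpha-rename suffixed intermediate names in first-use order so isomorphic graphs produce identical names.
```python
import re
from collections import defaultdict

def normalize_names(names):
    """Rename 'getitem_7'-style names so numeric suffixes increase monotonically."""
    counters = defaultdict(int)
    mapping = {}
    for name in names:
        if name not in mapping:
            base = re.sub(r"_\d+$", "", name)
            mapping[name] = f"{base}_{counters[base]}"
            counters[base] += 1
    return [mapping[n] for n in names]

# getitem_3 followed by getitem_2 normalizes to a monotonically increasing sequence.
print(normalize_names(["getitem_3", "mul_5", "getitem_2", "getitem_9"]))
# ['getitem_0', 'mul_0', 'getitem_1', 'getitem_2']
```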
| true
|
2,929,308,006
|
torch.Size input
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"fx",
"ciflow/inductor",
"release notes: export"
] | 9
|
CONTRIBUTOR
|
Summary: Support for `torch.Size` inputs was patchy before because `unflatten_fn` for this type returned a tuple. This PR cleans this up.
Fixes #149158
Test Plan: added test
Differential Revision: D71403635
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,929,292,119
|
[CI][docker] Use multistage build for triton
|
clee2000
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Seems to reduce docker pull times by ~3 min if triton is requested; some compressed docker sizes seem to have decreased by roughly 1/3.
Also adds a check that triton is installed/not installed.
| true
|
2,929,227,103
|
[AOTI][reland] Update test runner to use the new APIs
|
desertfire
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149412
Summary: Reland https://github.com/pytorch/pytorch/pull/147105. Switch to the newer aoti_compile_and_package APIs. Some tests still kept using legacy APIs, and will follow up with internal test refactoring.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D71470265](https://our.internmc.facebook.com/intern/diff/D71470265)
| true
|
2,929,045,957
|
fix differentiable collectives under inference mode
|
bdhirsh
|
open
|
[
"oncall: distributed",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
The 3 differentiable collectives that exist today are all registered to the autograd key, which means that they won't work with inference mode.
I gave them a separate implementation for the `CompositeExplicitAutograd` key ("below autograd"), where I call their non-differentiable counterparts.
The main annoying bit is that the schemas are slightly different between some of these pairs of collectives (some of the non-differentiable collectives take in non-const args). I did enough to get consistency in this PR.
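A hedged illustration of the general pattern with `torch.library` (a toy op, not the actual c10d functional collectives): an implementation registered to the `CompositeExplicitAutograd` key that calls a non-differentiable counterpart keeps the op usable under `torch.inference_mode()`.
```python
import torch
from torch.library import Library

lib = Library("demo_collectives", "DEF")  # toy namespace, purely illustrative
lib.define("scaled_identity(Tensor x) -> Tensor")

def _nondifferentiable_impl(x):
    # Stand-in for calling the non-differentiable collective counterpart.
    return 2 * x

# "Below autograd": picked up under inference_mode, where autograd keys are skipped.
lib.impl("scaled_identity", _nondifferentiable_impl, "CompositeExplicitAutograd")

with torch.inference_mode():
    print(torch.ops.demo_collectives.scaled_identity(torch.ones(3)))  # tensor([2., 2., 2.])
```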
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149411
* #149652
* #149514
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,928,998,436
|
Monkeypatch fake mode so it errors on invalid custom ops
|
tugsbayasgalan
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Internal version: [D71294776](https://www.internalfb.com/diff/D71294776)
| true
|
2,928,971,949
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float32 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 7
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float32&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38951400893).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 1: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float32], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float32], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float32], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float32], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float32], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float32], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float32], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float32], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float32], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float32], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float32], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float32], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float32], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float32], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float32], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float32], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float32], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float32], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float32], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float32]], args=(10), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,928,946,215
|
Fix B018 Useless Expressions in Multiple Files (#106571)
|
rocordemu
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)"
] | 8
|
NONE
|
### Description
This PR addresses `flake8-bugbear` `B018` warnings ("Found useless expression") by removing unused tuple and constant expressions in three files. These fixes clean up the codebase, reducing potential confusion and aligning with the linting goals of #106571. As a first-time contributor (coming from Node.js and learning Python), I’m excited to help improve PyTorch’s code quality!
### Changes
- **`torch/_dynamo/variables/ctx_manager.py`**
- **Issue**: `Found useless Tuple expression. Consider either assigning it to a variable or removing it.`
- **Fix**: Removed unnecessary tuple wrapper `(...,)` around a statement, keeping the side-effecting call intact.
- **`torch/_inductor/cudagraph_trees.py`**
- **Issue**: `Found useless Tuple expression. Consider either assigning it to a variable or removing it.`
- **Fix**: Removed unnecessary tuple wrapper `(...,)` around a statement, keeping the side-effecting call intact.
- **`torch/distributed/checkpoint/default_planner.py`**
- **Issue**: `Found useless Constant expression. Consider either assigning it to a variable or removing it.`
- **Fix**: Added a `return` statement before the standalone `True` expression, making it a meaningful return value (a toy before/after sketch follows below).
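A toy before/after of the two B018 patterns described above (illustrative only, not the actual PyTorch code):
```python
# Before: flake8-bugbear B018 "useless expression" findings.
def before(callback):
    (callback(),)  # useless tuple wrapping a side-effecting call
    True           # useless constant expression

# After: keep the side effect, and turn the constant into a real return value.
def after(callback):
    callback()
    return True
```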
### Details
- **Related Issue**: Fixes #106571
- **Linting Tool**: Verified with `flake8` and `flake8-bugbear`.
- **Testing**: Ran `pytest` locally to ensure no functional changes—only cleanup.
### Notes
Thanks to @spzala, @Skylion007, and @zou3519 for maintaining this awesome project! Any feedback on my fixes or PR process is welcome—I’m here to learn and contribute.
#### FYI
@albanD I am creating a new PR because EasyCLA was failing on the first one.
---
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,928,892,623
|
[MPS] nanmedian implementation
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps"
] | 4
|
COLLABORATOR
|
Implements nanmedian on MPS. This implementation only covers `torch.nanmedian(tensor)` without `keepdim` and `dim`.
Nanmedian with `dim` and `keepdim` will be implemented in a follow-up.
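For reference, the full-tensor reduction being added is equivalent to dropping NaNs and taking the median of the rest; a minimal CPU sketch (ignoring the all-NaN edge case):
```python
import torch

def nanmedian_reference(t: torch.Tensor) -> torch.Tensor:
    """Reference for torch.nanmedian(tensor) without dim/keepdim: NaNs are ignored."""
    return t[~t.isnan()].median()

t = torch.tensor([1.0, float("nan"), 3.0, 2.0])
print(nanmedian_reference(t), torch.nanmedian(t))  # both tensor(2.)
```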
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,928,886,230
|
Recompiles due to Python float object
|
efsotr
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo"
] | 4
|
NONE
|
### 🐛 Describe the bug
```python
import os
os.environ["TORCH_LOGS"] = "recompiles_verbose"
import torch
x = torch.randn((10, 10), device="cuda", requires_grad=False)
@torch.compile(dynamic=True)
def model(x, y):
return x * y
y = model(x, 1.5)
y2 = model(x, 2.5)
```
### Error logs
Just log
```
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] Recompiling function model in /tmp/ipykernel_2002586/874691697.py:9
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] triggered by the following guard failure(s):
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] guard 0 failures:
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] - 6/0: L['y'] == 1.5
```
### Versions
torch 2.5.1
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,928,877,933
|
[torch.compile] Recompiles due to Python float object
|
efsotr
|
closed
|
[] | 0
|
NONE
|
### 🐛 Describe the bug
```python
import os
os.environ["TORCH_LOGS"] = "recompiles_verbose"
import torch
x = torch.randn((10, 10), device="cuda", requires_grad=False)
@torch.compile(dynamic=True)
def model(x, y):
return x * y
y = model(x, 1.5)
y2 = model(x, 2.5)
```
```
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] Recompiling function model in /tmp/ipykernel_2002586/874691697.py:9
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] triggered by the following guard failure(s):
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] guard 0 failures:
V0318 23:04:20.093000 2002586 site-packages/torch/_dynamo/guards.py:2811] [6/1] [__recompiles_verbose] - 6/0: L['y'] == 1.5
```
### Versions
torch 2.5.1
| true
|
2,928,842,380
|
A bunch of typos
|
macleginn
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Improves readability.
| true
|
2,928,798,650
|
Fix broken build within xplat/caffe2
|
malfet
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: build",
"topic: improvements"
] | 9
|
CONTRIBUTOR
|
Summary:
Following a pull from open source, the build within xplat is broken
due to not finding <autograd/function.h>.
Within the python_function.cpp there seems to be a convention of using the
torch/csrc prefix.
This change includes that prefix to enable the build to proceed.
Test Plan:
Build a binary using torch.
https://www.internalfb.com/buck2/83122485-d3c3-43f4-97b4-81bb90450b3b
Unit tests run too
https://www.internalfb.com/intern/testinfra/testrun/13229323975828416
Further testing in CI and elsewise expected.
Reviewed By: malfet
Differential Revision: D70331539
| true
|
2,928,741,801
|
Release.md readability improvements
|
ZainRizvi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Improves a bunch of readability/grammatical issues with release.md.
Note: This was a claude code experiment, with all changes automatically generated. But it turns out minor edits like this are _not_ a good use of claude code, since it asked for approval on every single changed line. Probably way more efficient to toss the entire thing into a simple LLM.
| true
|
2,928,632,031
|
[AOTI] Forward fix unit test failures
|
desertfire
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149401
Summary: There is a land conflict between https://github.com/pytorch/pytorch/pull/149161 and https://github.com/pytorch/pytorch/pull/147105. We just need to update the APIs used in two new unit tests.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,928,617,951
|
[Sigmoid] Remove magic method in CapabilityBasedPartitioner
|
StellarrZ
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 4
|
CONTRIBUTOR
|
Summary: As title.
Test Plan: CI
Differential Revision: D70575197
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,928,571,297
|
`empty_strided` Causes Silent Inconsistency In Inductor
|
WLFJ
|
closed
|
[
"triaged",
"oncall: pt2",
"topic: fuzzer"
] | 4
|
NONE
|
### 🐛 Describe the bug
In most cases, no one would intentionally use an uninitialized tensor created with `empty_strided` to build a model, but Inductor's results differ from Eager's.
Repro:
```python
import torch
print(torch.__version__)
def f(*args):
sym_0, sym_1 = args
var_406 = torch.empty_strided(size=sym_0, stride=sym_1)
# print("BREAK")
var_314 = torch.arccos(var_406)
return torch.special.expm1(var_314), var_406
def eager_check(var_406):
var_314 = torch.arccos(var_406)
return torch.special.expm1(var_314)
res, input = f((8,), (16,),)
print('eager: is same?', torch.allclose(res, eager_check(input), equal_nan=True))
res, input = torch.compile(f)((8,), (16,),)
print('inductor: is same?', torch.allclose(res, eager_check(input), equal_nan=True))
```
Running result:
```
2.8.0.dev20250317+cu128
eager: is same? True
inductor: is same? False
```
If we uncomment `print` to break the graph, the Inductor result now matches Eager:
```
2.8.0.dev20250317+cu128
BREAK
eager: is same? True
BREAK
inductor: is same? True
```
### Error logs
No error log. Please feel free to ask me for more information.
### Versions
PyTorch 2.8.0.dev20250317+cu128
cc @chauhang @penguinwu
| true
|
2,928,406,533
|
Fix mtia_extension.cpp setDevice() to correctly set current_device
|
ileixe
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10
|
CONTRIBUTOR
|
We referred to this code and found a minor bug. Fixed for future reference for others.
| true
|
2,928,404,163
|
[xnnpack] Expose subgraph symbols
|
stepanhruda
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Summary: The main XNNPack target code uses symbols from subgraph, so they need to be exported; this was uncovered on macOS, where the symbols were not visible after linking.
Test Plan: CI / used for a macOS build on top of the stack.
Differential Revision: D71315023
| true
|
2,928,295,011
|
Does FSDP support nested wrapping for MoE models with Expert Parallelism?
|
zigzagcai
|
closed
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 10
|
NONE
|
Hi,
I am trying to use FSDP with Expert Parallelism to tackle training MoE models whose size is quite large (670B DeepSeek v3, for example). Even if we use fully sharded options, we encounter CUDA OOM during training; the root cause is that the per-layer parameter size is quite large. Therefore we implement Expert Parallelism.
However, the process group for the MoE part (Expert Parallelism) and the non-MoE part is not the same, so we need to wrap the MoE part and the non-MoE part separately. Detailed information on FSDP + EP can be found here: https://github.com/pytorch/pytorch/issues/114361
I tried to wrap the model according to [the suggestion](https://github.com/pytorch/pytorch/issues/114361#issuecomment-1824694162) from @awgu
```
ignored_mod = []
for layer_id, layer in enumerate(model.layers):
if layer_id >= config.first_k_dense_replace:
layer.feed_forward.moe_layer.experts = FSDP(
layer.feed_forward.moe_layer.experts,
process_group=expert_data_process_group,
sharding_strategy=ShardingStrategy.FULL_SHARD,
forward_prefetch=True,
backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
limit_all_gathers=True,
use_orig_params=True,
)
ignored_mod.append(layer.feed_forward.moe_layer.experts)
model = FSDP(
module=model,
process_group=data_process_group,
sharding_strategy=ShardingStrategy.FULL_SHARD,
auto_wrap_policy=ModuleWrapPolicy(wrap_cls),
forward_prefetch=True,
backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
limit_all_gathers=True,
use_orig_params=True,
ignored_modules=ignored_mod,
)
```
But it seems that FSDP cannot support nested wrapping with two process groups (one for the non-MoE parts and another for the MoE experts).
```
File "/blahblah/zigzagcai/.conda/envs/my_dev_env/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 483, in __init__
_auto_wrap(
File "/blahblah/zigzagcai/.conda/envs/my_dev_env/lib/python3.10/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 45, in _auto_wrap
_check_nested_wrapping(root_module)
File "/blahblah/zigzagcai/.conda/envs/my_dev_env/lib/python3.10/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 107, in _check_nested_wrapping
raise ValueError(
ValueError: FSDP auto wrapping requires modules to not already have FSDP applied but found model.layers.1.feed_forward.moe_layer.experts in
```
And I cannot even put the wrapped inner FSDP modules in the `ignored_modules` list when we try to materialize the outer FSDP module.
```
File "/blahblah/zigzagcai/.conda/envs/my_dev_env/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 442, in __init__
_init_ignored_module_states(self, module, ignored_modules, ignored_states)
File "/blahblah/zigzagcai/.conda/envs/my_dev_env/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py", line 314, in _init_ignored_module_states
state._ignored_modules = _get_ignored_modules(module, ignored_modules)
File "/blahblah/zigzagcai/.conda/envs/my_dev_env/lib/python3.10/site-packages/torch/distributed/fsdp/_init_utils.py", line 697, in _get_ignored_modules
raise ValueError("`ignored_modules` should not include FSDP modules")
ValueError: `ignored_modules` should not include FSDP modules
```
Then I checked the FSDP source code and found that the above assertion is on the relaxation TODO list:
https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/_init_utils.py#L680-L683
https://github.com/pytorch/pytorch/blob/main/torch/distributed/fsdp/_wrap_utils.py#L43-L45
So I removed the two assertions and the training runs successfully. So the question is: does FSDP support nested wrapping, that is:
(1) First, we wrap the MoE expert part with `expert_data_process_group`, and put the wrapped expert parts into `ignored_modules`.
(2) Then, we wrap the non-MoE part with `data_process_group`.
Is my implementation right for this case, given that the two assertions are removed?
Thanks in advance if anybody could provide some insights!
cc
@awgu @zhaojuanmao @rohan-varma @liangluofb @fegin @lessw2020 @mrshenli @penguinwu @kwen2501
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360 @kwen2501 @c-p-i-o
| true
|
2,928,120,786
|
[FSDP2] DTensors are always marked as cpu tensor when we use offload_to_cpu
|
cyr0930
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
If `offload_to_cpu == True` and fully_shard mode is used,
the DTensors of modules are always marked as CPU tensors (just a view, I think).
https://github.com/pytorch/pytorch/blob/v2.6.0/torch/distributed/fsdp/_fully_shard/_fsdp_param.py#L383
DTensors are moved to CPU in the above line but never get back to GPU, and an error occurs.
```Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!```
There should be some logic that recovers them somewhere in the all_gather function, I think.
### Versions
2.6.0 and main at the moment
| true
|
2,927,929,502
|
[WIP]Enabling running HPU test through run_test.py
|
AnantGulati
|
open
|
[
"open source",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
The purpose of this PR is to facilitate the use of run_test.py for executing PyTorch unit tests on HPU
Changes made:
- run_test.py: enables us to test for HPU supported tests by passing argument --hpu
- common_utils.py: enables adding skips for expected failures
- hpu_test_faliures.py: Enables us to pick x-fails and skips
The list of skipped and expected-failure files will be kept locally and loaded with the HPU environment.
| true
|
2,927,833,151
|
Torch RPC examples from docs say usage is deprecated.
|
vaughankraska
|
open
|
[
"oncall: distributed",
"module: docs",
"triaged",
"module: rpc"
] | 3
|
NONE
|
### 🐛 Describe the bug
When running any examples from the pytorch/examples repo or more importantly the examples from the RPC documentation, the following warning is displayed:
`UserWarning: You are using a Backend <class 'torch.distributed.distributed_c10d.ProcessGroupGloo'> as a ProcessGroup. This usage is deprecated since PyTorch 2.0. Please use a public API of PyTorch Distributed instead.`
For example I get the above warning when running [The Simple End-To-End example here](https://pytorch.org/docs/stable/rpc/distributed_autograd.html):
```python
import torch
import torch.multiprocessing as mp
import torch.distributed.autograd as dist_autograd
from torch.distributed import rpc
from torch import optim
from torch.distributed.optim import DistributedOptimizer
def random_tensor():
return torch.rand((3, 3), requires_grad=True)
def _run_process(rank, dst_rank, world_size):
name = "worker{}".format(rank)
dst_name = "worker{}".format(dst_rank)
# Initialize RPC.
rpc.init_rpc(
name=name,
rank=rank,
world_size=world_size
)
# Use a distributed autograd context.
with dist_autograd.context() as context_id:
# Forward pass (create references on remote nodes).
rref1 = rpc.remote(dst_name, random_tensor)
rref2 = rpc.remote(dst_name, random_tensor)
loss = rref1.to_here() + rref2.to_here()
# Backward pass (run distributed autograd).
dist_autograd.backward(context_id, [loss.sum()])
# Build DistributedOptimizer.
dist_optim = DistributedOptimizer(
optim.SGD,
[rref1, rref2],
lr=0.05,
)
# Run the distributed optimizer step.
dist_optim.step(context_id)
def run_process(rank, world_size):
dst_rank = (rank + 1) % world_size
_run_process(rank, dst_rank, world_size)
rpc.shutdown()
if __name__ == '__main__':
# Run world_size workers
world_size = 2
mp.spawn(run_process, args=(world_size,), nprocs=world_size)
```
What is the status of PyTorch RPC? [This post says](https://discuss.pytorch.org/t/warning-when-using-rpc/198009) it is mostly unmaintained, and the above warning is consistent with that, but nothing in the docs says otherwise.
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: EndeavourOS Linux (x86_64)
GCC version: (GCC) 14.2.1 20250207
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.41
Python version: 3.12.9 (main, Feb 12 2025, 14:50:50) [Clang 19.1.6 ] (64-bit runtime)
Python platform: Linux-6.13.7-zen1-1-zen-x86_64-with-glibc2.41
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 960
Nvidia driver version: 570.124.04
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.7.0
/usr/lib/libcudnn_adv.so.9.7.0
/usr/lib/libcudnn_cnn.so.9.7.0
/usr/lib/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/libcudnn_graph.so.9.7.0
/usr/lib/libcudnn_heuristic.so.9.7.0
/usr/lib/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz
CPU family: 6
Model: 165
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 66%
CPU max MHz: 5200,0000
CPU min MHz: 800,0000
BogoMIPS: 7200,00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 2,5 MiB (10 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] No relevant packages
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @svekars @sekyondaMeta @AlannaBurke @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @jjlilley @osalpekar @jiayisuse @mrzzd
| true
|
2,927,788,963
|
Fix `SequentialLR` deprecation warning about invoking `step(epoch)`
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"release notes: optim"
] | 4
|
CONTRIBUTOR
|
Fixes #116776
## Changes
- Refactor `LRScheduler.step` method, leave `epoch` check logic in public method `step`
- Move update `lr` logic to `_update_lr` method
- Make `SequentialLR` use `_update_lr` to avoid the unnecessary warning message (a minimal sketch of this split follows below)
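A minimal sketch of this split (not the actual PyTorch implementation; the scheduler internals are simplified): the epoch deprecation warning lives only in the public `step()`, while `_update_lr()` holds the shared lr update that `SequentialLR` can call without triggering the warning.
```python
import warnings


class LRSchedulerSketch:
    def __init__(self, optimizer):
        self.optimizer = optimizer
        self.last_epoch = -1

    def get_lr(self):
        raise NotImplementedError

    def step(self, epoch=None):
        if epoch is None:
            self.last_epoch += 1
        else:
            # deprecated path: the warning is emitted here and only here
            warnings.warn("The epoch parameter in `scheduler.step()` is deprecated.")
            self.last_epoch = epoch
        self._update_lr()

    def _update_lr(self):
        # shared lr-update logic with no deprecation checks
        for group, lr in zip(self.optimizer.param_groups, self.get_lr()):
            group["lr"] = lr


class SequentialLRSketch(LRSchedulerSketch):
    def __init__(self, optimizer, schedulers, milestones):
        super().__init__(optimizer)
        self._schedulers = schedulers
        self._milestones = milestones

    def step(self):
        self.last_epoch += 1
        idx = sum(self.last_epoch >= m for m in self._milestones)
        # call the internal hook instead of step(epoch), so no warning is emitted
        self._schedulers[idx]._update_lr()
```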
## Test Result
```bash
pytest test/optim/test_lrscheduler.py -vv
```

| true
|
2,927,634,520
|
Cannot torch.jit.script nn.MultiheadAttention when bias is set to False
|
CloudyDory
|
open
|
[
"oncall: jit"
] | 1
|
NONE
|
### 🐛 Describe the bug
When we apply `torch.jit.script` to `nn.MultiheadAttention` with `bias=False`, the following error occurs:
```
import torch
import torch.nn as nn
layer = nn.MultiheadAttention(128, 8, bias=False)
# layer = nn.Linear(128, 128, bias=False)
layer_jit = torch.jit.script(layer)
```
```
RuntimeError:
'NoneType' object has no attribute or method 'dtype'.:
File "/home/user/miniconda3/lib/python3.12/site-packages/torch/nn/modules/activation.py", line 1250
# they don't!
why_not_fast_path = "non-self attention was used (query, key, and value are not the same Tensor)"
elif self.in_proj_bias is not None and query.dtype != self.in_proj_bias.dtype:
~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
why_not_fast_path = f"dtypes of query ({query.dtype}) and self.in_proj_bias ({self.in_proj_bias.dtype}) don't match"
elif self.in_proj_weight is None:
```
Changing `bias` back to `True` avoids this error. Setting the bias of other layers (such as `Linear`) does not produce such an error.
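A minimal sketch of the refinement pattern that TorchScript does accept (an assumption about the root cause, not an upstream fix): reading the `Optional` value as a local/argument lets the `is not None and ...` check compile, whereas the same chained check on the `self.in_proj_bias` attribute fails to script.
```python
import torch
from typing import Optional


@torch.jit.script
def fast_path_check(query: torch.Tensor, in_proj_bias: Optional[torch.Tensor]) -> str:
    why_not_fast_path = ""
    # refinement on a local/argument works; the failing code performs the same
    # chained check directly on an Optional module attribute
    if in_proj_bias is not None and query.dtype != in_proj_bias.dtype:
        why_not_fast_path = "dtype mismatch between query and in_proj_bias"
    return why_not_fast_path


print(fast_path_check(torch.randn(2, 4), None))  # compiles and returns ""
```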
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 5881.0000
CPU min MHz: 545.0000
BogoMIPS: 8982.53
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.2
[pip3] numpydoc==1.7.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 2.2.2 py312h2809609_0
[conda] numpy-base 2.2.2 py312he1a6c75_0
[conda] numpydoc 1.7.0 py312h06a4308_0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,927,631,934
|
[Docs] Make `torch.Library`'s `kind` have no default value to be consistent with the code
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: docs"
] | 8
|
CONTRIBUTOR
|
Fixes #149389
| true
|
2,927,625,772
|
[Docs] `torch.Library`'s `kind` is inconsistent with the code
|
shink
|
closed
|
[
"triaged",
"actionable",
"module: library"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The doc says that `kind` defaults to `IMPL` but it actually does not.
<img width="821" alt="Image" src="https://github.com/user-attachments/assets/2eb7b65a-d642-4a13-b111-edc43080b3a0" />
Calling `torch.library.Library("fsdp")` will get this:
```
TypeError: Library.__init__() missing 1 required positional argument: 'kind'
```
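A minimal usage example consistent with the current code (the namespace/kind values here are just for illustration): `kind` has to be passed explicitly.
```python
import torch

# works: kind is passed explicitly
lib = torch.library.Library("aten", "IMPL")

# raises TypeError: Library.__init__() missing 1 required positional argument: 'kind'
# torch.library.Library("aten")
```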
### Versions
main
cc @anjali411 @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,927,605,033
|
[Windows][inductor] fix blank space break windows file path
|
xuhancn
|
closed
|
[
"module: windows",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 13
|
COLLABORATOR
|
Fixes #149310
From the original error message:
```cmd
Command:
cl /I C:/Program Files/Python310/Include /I c:/code/.env/lib/site-packages/torch/include /I c:/code/.env/lib/site-packages/torch/include/torch/csrc/api/include /I c:/code/.env/lib/site-packages/torch/include/TH /I c:/code/.env/lib/site-packages/torch/include/THC /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/user/AppData/Local/Temp/torchinductor_user/ou/coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.cpp /LD /FeC:/Users/user/AppData/Local/Temp/torchinductor_user/ou/coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.pyd /link /LIBPATH:c:/code/.env/Scripts/libs /LIBPATH:c:/code/.env/lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib
Output:
Microsoft (R) C/C++ Optimizing Compiler Version 19.43.34809 for x86
Copyright (C) Microsoft Corporation. All rights reserved.
cl : Command line warning D9025 : overriding '/openmp' with '/openmp:experimental'
cl : Command line warning D9024 : unrecognized source file type 'Files/Python310/Include', object file assumed
coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.cpp
C:/Users/user/AppData/Local/Temp/torchinductor_user/ou/coubnfnqsm2gbdzdytufv46jotd6sxsnnhgldiw45pl5yjq5nbvz.cpp(21): fatal error C1083: Cannot open include file: 'Python.h': No such file or directory
```
Python is installed under `C:/Program Files/Python310`, and the blank space breaks the file path.
Solution:
Quote the Windows file paths. After that, the command becomes:
```cmd
cl /I "C:/Users/Xuhan/.conda/envs/new_build/Include" /I "C:/Users/Xuhan/.conda/envs/new_build/lib/site-packages/torch/include" /I "C:/Users/Xuhan/.conda/envs/new_build/lib/site-packages/torch/include/torch/csrc/api/include" /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /D CPU_CAPABILITY_AVX512 /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/Xuhan/AppData/Local/Temp/tmp1wsj0m8r/za/czarp3ly5c22ge3hydvnzvad4cjimyr3hkwvofodxqffgil7frfd.cpp /arch:AVX512 /FeC:/Users/Xuhan/AppData/Local/Temp/tmp1wsj0m8r/za/czarp3ly5c22ge3hydvnzvad4cjimyr3hkwvofodxqffgil7frfd.pyd /LD /link /LIBPATH:"C:/Users/Xuhan/.conda/envs/new_build/libs" /LIBPATH:"C:/Users/Xuhan/.conda/envs/new_build/lib/site-packages/torch/lib" "torch.lib" "torch_cpu.lib" "torch_python.lib" "sleef.lib"
```
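A minimal sketch of the quoting approach (not the exact `cpp_builder` code):
```python
# Wrap paths in quotes so a space in e.g. "C:/Program Files/Python310" does not
# get split into separate compiler arguments.
def _maybe_quote(path: str) -> str:
    if " " in path and not path.startswith('"'):
        return f'"{path}"'
    return path


include_dirs = ["C:/Program Files/Python310/Include"]
include_flags = [f"/I {_maybe_quote(p)}" for p in include_dirs]
print(include_flags)  # ['/I "C:/Program Files/Python310/Include"']
```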
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,927,486,447
|
[aoti] follow up to use new api in `test_provenance_tracing.py`
|
YUNQIUGUO
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Summary:
As title. Follow-up of D71181284, plus some minor refactoring.
Context : D69609685 (update test runner to use new api) / https://github.com/pytorch/pytorch/pull/147105
Test Plan:
```
buck2 run -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=h100 @//mode/opt fbcode//caffe2/test/inductor:provenance_tracing -- -r test_triton_kernel_to_post_grad_tracing_cpu
```
Differential Revision: D71375725
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,927,383,675
|
Add AOTI shim for _weight_int4pack_mm_cpu_tensor (#149031)
|
Xia-Weiwen
|
closed
|
[
"open source",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
**Summary**
The previous implementation of the shim did not align with the design and was removed by https://github.com/pytorch/pytorch/pull/148907. This PR adds it back in the MKLDNN backend files and re-enables the CPP wrapper UT.
**Test plan**
```
pytest -s test/inductor/test_cpu_cpp_wrapper.py -k test_woq_int4
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149031
Approved by: https://github.com/leslie-fang-intel, https://github.com/EikanWang, https://github.com/desertfire
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,927,334,153
|
Fix preload for cusparseLT
|
vihangm
|
closed
|
[
"triaged",
"open source",
"topic: bug fixes",
"topic: not user facing"
] | 11
|
NONE
|
This was added in #144477, but the preload logic was wrong: a missing `nvidia` path would trigger the `continue` in the loop, so the alternate location was never searched.
This is easily reproduced when trying to use torch 2.6.0 in a hermetic bazel build.
After this patch, torch manages to find and load cusparseLt properly.
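An illustrative sketch of the control flow (not the actual `torch/__init__.py` code; the library paths are hypothetical):
```python
import ctypes
import os


def preload_lib(candidates):
    for path in candidates:
        if not os.path.exists(path):
            continue  # the buggy version bailed out of the whole search here
        ctypes.CDLL(path)
        return True
    return False


# fixed behaviour: treat the nvidia wheel path and the fallback path as one
# candidate list, so a missing first candidate does not skip the alternate one
candidates = [
    "site-packages/nvidia/cusparselt/lib/libcusparseLt.so.0",  # hypothetical layout
    "site-packages/cusparselt/lib/libcusparseLt.so.0",         # hypothetical fallback
]
preload_lib(candidates)
```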
Signed-off-by: Vihang Mehta <vihang@gimletlabs.ai>
| true
|
2,927,323,171
|
Fix local compilication and hipification
|
zoranzhao
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
MEMBER
|
Summary:
As title, we need to fix the issue introduced by
https://github.com/pytorch/pytorch/pull/148305
Test Plan: CI and e2e https://docs.google.com/document/d/1Bu-MxJCkN7WaRkKJLVBQvnSp8yV0v3Aeb3Y9R5sjeHw/edit?tab=t.0
Differential Revision: D71373001
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,927,247,376
|
Reuse format_size utils
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #138222
* #149601
* __->__ #149383
| true
|
2,927,130,270
|
Warn user of existing lock file to avoid infinite waiting
|
chaihahaha
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Sometimes the Python script does not exit normally and the lock file remains in place. In this case, `file_baton.py` may sleep forever waiting for the lock file to be released. This PR adds a warning that shows the existing lock file path, letting the user better understand which file to delete when the wait is too long.
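A minimal sketch of the intended behavior (an assumed shape of the change, not the exact `file_baton.py` code):
```python
import os
import time
import warnings


class FileBatonSketch:
    def __init__(self, lock_file_path: str, wait_seconds: float = 0.1,
                 warn_after_seconds: float = 10.0):
        self.lock_file_path = lock_file_path
        self.wait_seconds = wait_seconds
        self.warn_after_seconds = warn_after_seconds

    def wait(self) -> None:
        start = time.time()
        warned = False
        while os.path.exists(self.lock_file_path):
            if not warned and time.time() - start > self.warn_after_seconds:
                warnings.warn(
                    f"Still waiting on lock file '{self.lock_file_path}'. "
                    "If no other process is running, delete it manually.")
                warned = True
            time.sleep(self.wait_seconds)
```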
| true
|
2,927,077,303
|
Update xla pin
|
zpcore
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Update xla pin to fix the github test failure issue. [failure link](https://hud.pytorch.org/failure?name=pull+%2F+linux-focal-py3_9-clang9-xla+%2F+test+%28xla%2C+1%2C+1%2C+lf.linux.12xlarge%29&jobName=linux-focal-py3_9-clang9-xla+%2F+test+%28xla%2C+1%2C+1%2C+lf.linux.12xlarge%29&failureCaptures=%5B%22test_call_jax_pytree%22%2C%22TestJaxInterop%22%5D).
The test runs the torch_xla jax tests but installs the jax/jaxlib dependencies as we did in https://github.com/pytorch/xla/pull/8781/files.
| true
|
2,927,065,279
|
[ROCm] Use alternate mirror for drm repo
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
Fixes issue with building ROCm manywheel and libtorch images eg. https://github.com/pytorch/pytorch/actions/runs/13887711267/job/38854659005#step:4:8328
```
#53 2.832 Cloning into 'drm'...
#53 2.849 fatal: unable to access 'https://gitlab.freedesktop.org/mesa/drm.git/': The requested URL returned error: 503
#53 2.851 ./install_rocm_drm.sh: line 29: pushd: drm: No such file or directory
```
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,927,010,157
|
[MPS/inductor] Add support for `modified_bessel_i1`.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,945,848
|
[MPS] Add `bicubic2d_aa`
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149378
Which is currently the most frequently requested op in https://github.com/pytorch/pytorch/issues/141287
Mostly done by refactoring `upsample_bilinear2d_aa` to accept a Functor as one of the template arguments, which closely follows ideas from https://github.com/python-pillow/Pillow/blob/eec43cfbc0c9962af2b728677d1d011b311584db/src/libImaging/Resample.c as well as
https://github.com/pytorch/pytorch/blob/bb42e4d1374828ba417fa252d2bcac2f07d368e8/aten/src/ATen/native/cuda/UpSampleBilinear2d.cu#L472-L478
Unit tests are populated by copying the upsample_bilinear2d_aa tests and reusing them for upsample_bicubic2d_aa.
At that point, the only differences between upsample_bilinear2d_aa and upsample_bicubic2d_aa are the convolution kernel function and its size: 3x3 for bilinear, 5x5 for bicubic.
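Example usage this enables on MPS (assuming an MPS-capable machine):
```python
import torch
import torch.nn.functional as F

if torch.backends.mps.is_available():
    x = torch.randn(1, 3, 64, 64, device="mps")
    # anti-aliased bicubic downsampling, previously unsupported on MPS
    y = F.interpolate(x, size=(32, 32), mode="bicubic", antialias=True)
    print(y.shape)  # torch.Size([1, 3, 32, 32])
```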
| true
|
2,926,942,262
|
[ONNX] Update types in VerificationInfo
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: docs"
] | 9
|
COLLABORATOR
|
torch.types.Number was rendered as is in the documentation and can be confusing. We write the original types instead to reduce confusion for users.
| true
|
2,926,903,262
|
Update torch-xpu-ops commit pin
|
chunhuanMeng
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/xpu",
"release notes: xpu"
] | 1
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [fac0cf0118f3bc82fac4be46fb358546dd191f44](https://github.com/intel/torch-xpu-ops/commit/fac0cf0118f3bc82fac4be46fb358546dd191f44), includes:
- Fix torch xpu build workflow logic
- Refine XCCL build option
- Align python executable to PyTorch
- Ensure conditional setting of `AOT_TARGETS` and add `none` option to skip AOT compilation
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,926,873,145
|
[ONNX] Expose verification utilities
|
pytorchbot
|
closed
|
[
"open source",
"release notes: onnx"
] | 1
|
COLLABORATOR
|
Expose verification utilities to public documentation.
- https://github.com/pytorch/pytorch/pull/132530
- https://github.com/pytorch/pytorch/pull/149377
| true
|
2,926,865,889
|
[PrivateUse1] Allow out-of-tree devices to pass check when validating csr tensor args
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: sparse"
] | 24
|
CONTRIBUTOR
|
Fixes #149303
Follow-up: #147306
Because we have a dispatch key named `DispatchKey::SparseCsrPrivateUse1` for this case, we allow users to create a csr tensor on out-of-tree devices, so we should also let that pass the check.
| true
|
2,926,862,538
|
[Inductor-CPU] Faster int8 WoQ GEMM for small M with explicit prefetching and different outer loops
|
sanchitintel
|
open
|
[
"triaged",
"open source",
"ciflow/trunk",
"topic: performance",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 14
|
COLLABORATOR
|
### Summary
Fixes #148494
Explicitly prefetch the cache lines of the next `B` block to accelerate int8 WoQ (BF16 activation, int8 statically quantized weights) GEMM for small `M` dimension.
Some of this code (the outer loops of the GEMM) is being ported over from Intel Extension for PyTorch. The macro-kernel* and the micro-kernel* are essentially the same, but optionally prefetch a block of B. Templatization is used to prevent branching from causing a slowdown due to unnecessary prefetching.
\* - in [BLIS](https://dl.acm.org/doi/10.1145/2764454) parlance
### Performance data with BS 1
Machine: 32 cores of one socket of an Intel Xeon SP Gen 5 machine
| Model | input tokens | output tokens | next-token latency before this PR | Next-token latency after this change | Speedup |
|-----------|-------------|-----------------|--------------------------------------|------------------------------------------|-----------|
|GPT-J | 128 | 128 | 42 ms | 38 ms | 9.52 % |
| GPT-J | 1024 | 1024 | 48 ms | 45 ms | 6.25 % |
|LLaMA 3.1 8B Instruct | 128 | 128 | 52 ms | 47 ms| 9.61% |
|LLaMA 3.1 8B Instruct | 1024 | 1024 | 57 ms | 53 ms| 7.01% |
While the input shapes of the GEMMs corresponding to linear layers for next-token computation remain the same across different numbers of input & output tokens, the difference in next-token latency is due to attention in those cases.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,841,542
|
[Inductor][CPP] rename shim_mkldnn.h/.cpp to shim_cpu.h/.cpp
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149372
**Summary**
Previous discussion is here: https://github.com/pytorch/pytorch/pull/148907#issuecomment-2712795600
Rename these files because
- they may hold mkldnn-unrelated code for CPU
- filenames are aligned with files for CUDA and XPU
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,832,278
|
test if free chunk
|
laithsakka
|
open
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149371
| true
|
2,926,799,917
|
UNSTABLE pull / cuda12.4-py3.10-gcc9-sm75 / test (pr_time_benchmarks)
|
malfet
|
closed
|
[
"module: ci",
"triaged",
"oncall: pt2",
"unstable"
] | 4
|
CONTRIBUTOR
|
See https://hud.pytorch.org/hud/pytorch/pytorch/main/1?per_page=50&name_filter=pr_time&mergeLF=true <- job passes and fails intermittently with no apparent commit that could have started it
cc @chauhang @penguinwu @seemethere @pytorch/pytorch-dev-infra
| true
|
2,926,798,771
|
[ROCm] Enable several fsdp related UTs
|
pragupta
|
closed
|
[
"oncall: distributed",
"module: rocm",
"triaged",
"open source",
"Merged",
"topic: not user facing",
"ciflow/periodic"
] | 6
|
CONTRIBUTOR
|
Enabling 26 UTs for ROCm in the following files:
- distributed._shard.sharded_optim.test_sharded_optim - 2 UTs
- distributed._shard.sharded_tensor.ops.test_binary_cmp - 4 UTs
- distributed._shard.sharded_tensor.ops.test_init - 3 UTs
- distributed._shard.sharded_tensor.ops.test_embedding - 2 UTs
- distributed._shard.sharded_tensor.ops.test_embedding_bag - 2 UTs
- distributed._composable.test_replicate_with_compiler - 4 UTs
- distributed._composable.fsdp.test_fully_shard_grad_scaler - 1 UTs
- distributed.tensor.test_attention - 4 UTs
- distributed.tensor.test_matrix_ops - 1 UTs
- distributed.tensor.test_tensor_ops - 1 UTs
- distributed.fsdp.test_fsdp_grad_acc - 2 UTs
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,926,767,855
|
[MPS] Implement support for `modified_bessel_i1` in eager.
|
dcci
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: mps",
"release notes: mps"
] | 6
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,926,758,816
|
[ca] fix accumulate grad polyfill when different strides between param and grad
|
xmfan
|
closed
|
[
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149367
* #148516
* #149642
* #149641
* #149229
Optimizers assume param and grad must have the same layout, which is enforced by the AccumulateGrad node. We could instead argue that optimizers should handle param/grad having different strides.
FIXES https://github.com/pytorch/pytorch/issues/127922 and some benchmarks
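A minimal illustration of the layout mismatch the polyfill has to handle (an assumption about its intent, not the actual compiled-autograd code):
```python
import torch

param = torch.empty(4, 8).t().contiguous().t()   # shape (4, 8), strides (1, 4)
grad = torch.randn(4, 8)                          # default strides (8, 1)

if grad.stride() != param.stride():
    # re-lay-out the grad to match the parameter's memory layout before
    # accumulation, so optimizers see matching strides
    grad = torch.empty_strided(param.size(), param.stride()).copy_(grad)

assert grad.stride() == param.stride()
```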
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,740,052
|
Register flop formulas for flex attention
|
carmocca
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Addresses https://pytorch.slack.com/archives/C3PDTEV8E/p1742212622454339
| true
|
2,926,739,490
|
[MPS/BE] Remove decorator that skipped test on macOS 12.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 3
|
MEMBER
|
macOS 12 is not really supported anymore.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,674,704
|
[AOTI] Add num_runners to AOTIModelPackageLoader
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149364
Summary: AOTIModelContainerRunner takes a num_runners argument for multi-threaded inference, but AOTIModelPackageLoader forgot to take the same parameter, although its run() API already expects to take an optional cudaStream_t parameter for multi-threaded inference.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D71357418](https://our.internmc.facebook.com/intern/diff/D71357418)
| true
|
2,926,659,854
|
[MPS/BE] @parametrize generation of pointwise_ops.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
Make this less error-prone and reduce duplication.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,657,572
|
Add x86-simd-sort accelerated sorting
|
sterrettm2
|
open
|
[
"module: cpu",
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 6
|
CONTRIBUTOR
|
This is a new pull request for the same feature as #127936; the [issue](https://github.com/pytorch/pytorch/issues/140590) affecting that patch [has been resolved](https://github.com/pytorch/pytorch/pull/127936#issuecomment-2686374580). That patch is still closed and doesn't seem to be getting responses, so I'm submitting this new patch in the hope of getting more attention; please tell me if this is a problem.
This patch adds x86-simd-sort as a submodule to accelerate sorting for 32-bit and 64-bit datatypes when AVX2 or AVX512 are available.
For contiguous data, this can be over a 10x speedup for large arrays. For discontiguous data, it can give over a 4x speedup with larger arrays. These benchmarks were gathered on a Skylake system (7900x), limited to 8 threads.
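For a quick local check, a small self-contained timing snippet (not the benchmark harness used for the tables below):
```python
import time
import torch

torch.manual_seed(0)
x = torch.randn(1_000_000, dtype=torch.float32)

torch.sort(x)  # warm-up

start = time.perf_counter()
for _ in range(10):
    torch.sort(x)
elapsed = (time.perf_counter() - start) / 10
print(f"mean sort time: {elapsed * 1e3:.2f} ms")
```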
<details>
<summary><b>Contiguous Benchmarks</b></summary>
```
float32, normally distributed (in microseconds)
size Default AVX2 AVX512 Default/AVX2 Default/AVX512
16 7.150844336 6.886271477 7.132277489 1.038420335 1.002603214
128 9.208030939 8.478154898 7.846915245 1.086089019 1.173458697
1024 37.79037627 23.60707456 16.44122627 1.600807257 2.298513241
10000 714.7355628 203.9921844 105.5683001 3.503739934 6.770361577
100000 8383.074408 721.6333354 465.3709247 11.61680593 18.01374766
1000000 97124.31945 5632.054572 3920.148401 17.24491803 24.77567416
10000000 1161974.907 86070.48988 71533.82301 13.50027063 16.24371323
int32_t, uniformly distributed (in microseconds)
size Default AVX2 AVX512 Default/AVX2 Default/AVX512
16 7.203208685 6.92212224 7.014458179 1.040606975 1.026908779
128 8.972388983 8.195516348 7.592543125 1.094792396 1.18173698
1024 32.77489477 23.6874548 15.36617105 1.383639359 2.132925285
10000 607.8824128 193.3402024 99.25090471 3.144107667 6.124703997
100000 523.9384684 608.1836536 442.3166784 0.861480682 1.184532472
1000000 5211.348627 5271.598405 3518.861883 0.988570871 1.480975611
10000000 133853.6263 81463.05084 67852.97394 1.643120714 1.972700952
```
</details>
Note that the int32_t sort is accelerated by FBGEMM's radix sort for larger arrays, but this only handles contiguous data and in one sorting direction.
<details>
<summary><b>Discontiguous Benchmarks</b></summary>
```
float, normal distributed, discontiguous in sorted dimension (in microseconds)
size Default AVX2 AVX512 Default/AVX2 Default/AVX512
16 3.836543679 4.011214256 3.84376061 0.956454439 0.99812243
128 5.755310194 5.755723127 4.820394962 0.999928257 1.193949923
1024 49.46946019 24.78790785 15.47874362 1.995709379 3.195960952
10000 665.2505291 236.6165959 143.9490662 2.811512551 4.621429974
100000 4328.002203 1329.001212 818.3516414 3.256582586 5.288682743
1000000 47651.5018 16693.72045 11827.39551 2.854456677 4.028909133
10000000 556655.1288 236252.6258 184215.9828 2.356185998 3.021752621
int32_t, uniformly distributed, discontiguous in sorted dimension (in microseconds)
size Default AVX2 AVX512 Default/AVX2 Default/AVX512
16 3.817994356 3.878117442 3.770039797 0.984496837 1.012719908
128 5.578731397 5.577152082 4.716770534 1.000283176 1.182743862
1024 43.3412619 23.61275801 14.55446819 1.835501887 2.977866408
10000 634.3997478 224.4322851 133.9518324 2.826686667 4.736028889
100000 4084.358152 1292.363303 781.7867576 3.16037924 5.22438902
1000000 46262.20465 16608.35284 11367.51817 2.785478192 4.06968381
10000000 541231.9104 235185.1861 180249.9294 2.301301028 3.002674742
```
</details>
And single threaded performance on the same 7900x system.
<details>
<summary><b>Single Core Performance</b></summary>
```
float32, normally distributed (in microseconds)
size default avx2 avx512 Default/AVX2 Default/AVX512
16 7.113132954 7.125889063 6.855771542 0.998209892 1.03753938
128 9.120340586 8.584395647 7.56901145 1.06243246 1.204957959
1024 36.27155249 24.53012899 15.79697341 1.478653149 2.296107714
10000 711.9155329 200.382199 108.2926268 3.552788305 6.573998194
100000 8399.78071 2366.537676 1330.463447 3.54939657 6.313424639
1000000 100915.9743 28517.82126 17892.53366 3.538698604 5.640116497
10000000 1204376.316 372791.338 258797.0257 3.230698231 4.653748678
int32_t, uniformly distributed (in microseconds)
size default avx2 avx512 Default/AVX2 Default/AVX512
16 6.839853764 6.9264884 6.681355715 0.987492272 1.023722438
128 8.356203556 8.445468426 7.25971818 0.989430442 1.151036907
1024 30.88020962 23.73411948 14.40595382 1.30108933 2.143572721
10000 598.6316072 191.3458307 99.9496872 3.128532276 5.989329471
100000 1971.655619 2248.225125 1253.185778 0.87698318 1.57331471
1000000 24533.7907 27625.80853 16539.86351 0.888075029 1.483312766
10000000 361025.8579 358125.9727 248421.4783 1.008097389 1.453279565
float, normal distributed discontiguous in sorted dimension (in microseconds)
size default avx2 avx512 Default/AVX2 Default/AVX512
16 3.9883219 3.897530437 3.803153276 1.023294613 1.048688183
128 5.797074333 5.687333666 4.795829393 1.019295627 1.208774095
1024 49.77498938 25.21366438 16.05679234 1.974127546 3.099933556
10000 670.7694155 244.0156184 145.6871839 2.748879026 4.604175863
100000 8045.512319 2731.892052 1707.214788 2.945033027 4.712653836
1000000 96954.93258 32101.35607 21151.68938 3.020275292 4.583791433
10000000 1159710.248 427844.882 316131.2342 2.710585769 3.668445642
int32_t, uniformly distributed discontiguous in sorted dimension (in microseconds)
size default avx2 avx512 Default/AVX2 Default/AVX512
16 3.780948997 3.872428179 3.718787193 0.97637679 1.016715612
128 5.341575543 5.529783332 4.779936273 0.965964708 1.117499322
1024 39.1874838 23.01476824 15.89414877 1.702710337 2.465528942
10000 555.9280075 225.5575979 137.2813291 2.464683135 4.049552922
100000 6663.585735 2620.158211 1609.420934 2.543199761 4.140362284
1000000 79281.4539 31679.51566 20372.97304 2.502609407 3.891501439
10000000 961423.1586 417279.1243 305512.3885 2.304028893 3.146920369
```
</details>
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @leslie-fang-intel @r-devulap
| true
|
2,926,654,593
|
[tp] change test_layer_norm_bwd_req_grad test to avoid uneven TP sharding which causes timeout
|
XilunWu
|
open
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #148943
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149361
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,926,654,567
|
[EZ][Docker] Remove `install_db.sh`
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149360
Which is a vestige of the caffe2 days and has been a no-op since https://github.com/pytorch/pytorch/pull/125092
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,926,635,406
|
[Inductor-CPU] Fix int8 WoQ AMX micro-kernel when `block_n` is 16 or 48
|
sanchitintel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
### Summary
When the block-size for `N` dimension is `48` for the AMX GEMM micro-kernel for int8 WoQ (BF16 activation, int8 statically quantized weights), the logic for handling the tail is incorrect - we can't always dequantize 32 elements of weights at a time because we may need to dequantize `32` followed by `16` when `block_n` is `48` (for each `K`).
This PR fixes that logic, which was initially exposed with `M=17, N=1024, K=1024`.
This PR also fixes the case of `block_n` being 16.
I had introduced [this bug ](https://github.com/pytorch/pytorch/commit/ca9813ea1498ad907abf5dc1cf20c83a1973969a) after misreading GEMM blockings as `["block_m", "block_k", "block_n"]` instead of `["block_m", "block_n", "block_k"]` (so I had wrongly assumed that `block_n` was always 32).
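The arithmetic behind the fix, as a small illustration (the actual change is in the C++ template code):
```python
# With block_n in {16, 48}, a row of B cannot always be dequantized 32 elements
# at a time; the tail must be handled as 32 + 16, or just 16.
def dequant_chunks(block_n: int):
    chunks = []
    remaining = block_n
    while remaining >= 32:
        chunks.append(32)
        remaining -= 32
    if remaining:
        chunks.append(remaining)
    return chunks


print(dequant_chunks(32))  # [32]
print(dequant_chunks(48))  # [32, 16]
print(dequant_chunks(16))  # [16]
```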
### Future work
While this PR simply fixes a bug, it's possible to optimize the code pertaining to dequantizing & caching the B buffer - for `block_n` being `16` or `48`, `K` would always be a multiple of 2, so `K * block_n` will always be a multiple of 32. Since `dequantized_B_buf` stores rows contiguously, when `block_n` would be `16` or `48`, we could store 32 BF16 elements at a time instead of storing `16` at a time (when `block_n` is 16), or `32` followed by `16` at a time (when `block_n` is 48). Such an optimization would lower `register -> memory` data movements.
cc @jgong5 @mingfeima @XiaobingSuper @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,618,449
|
[dynamo] Add mem leak test
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149358
Test for https://github.com/pytorch/pytorch/pull/148480
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,926,615,896
|
[ROCm][TunableOp] Minor fix to BLAS logging for ScaledGEMM with no bias vector.
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
Omit the bias type argument for BLAS logging when there is a ScaledGEMM with no bias vector.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,926,606,905
|
Make numpy check optional
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
We may want to skip the numpy smoke tests, hence making the check optional.
| true
|
2,926,570,422
|
[dtensor][tp] debug test_layer_norm_bwd_req_grad timeout when #GPU=3
|
XilunWu
|
open
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149355
### Summary
test_layer_norm_bwd_req_grad has a timeout failure when the model dimensions cannot be evenly divided by the number of GPUs.
### Triage
Run `CUDA_VISIBLE_DEVICES="0,1,2" pytest test/distributed/tensor/test_math_ops.py -s -k test_layer_norm_bwd_req_grad` to reproduce the timeout.
Using `py-spy` to find where the program is stuck shows:
```
Thread 2998997 (active): "MainThread"
convert (torch/nn/modules/module.py:1344)
_apply (torch/nn/modules/module.py:942)
_apply (torch/nn/modules/module.py:915)
to (torch/nn/modules/module.py:1355)
test_layer_norm_bwd_req_grad (tensor/test_math_ops.py:501)
wrapper (torch/testing/_internal/distributed/_tensor/common_dtensor.py:407)
wrapper (torch/testing/_internal/common_utils.py:3153)
wrapper (torch/testing/_internal/common_distributed.py:607)
run_test (torch/testing/_internal/common_distributed.py:734)
_run (torch/testing/_internal/common_distributed.py:713)
run (multiprocessing/process.py:108)
_bootstrap (multiprocessing/process.py:314)
_main (multiprocessing/spawn.py:135)
spawn_main (multiprocessing/spawn.py:122)
<module> (<string>:1)
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,926,566,552
|
[State_dict] Remove functools.cache and add unit test
|
mori360
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)"
] | 8
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/149100
`@functools.cache` would keep `self` alive, leading to unexpected memory behavior (e.g., in the linked issue, if the model is deleted, the model's memory is still occupied).
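A toy demonstration of the mechanism (not the `distributed.checkpoint` code): caching a bound method stores `self` in the cache keys on the class-level function object, so the instance cannot be garbage collected even after all other references are gone.
```python
import functools
import gc
import weakref


class Holder:
    @functools.cache
    def compute(self):
        return 42


h = Holder()
h.compute()
ref = weakref.ref(h)
del h
gc.collect()
print(ref() is None)  # False: the cache still holds the instance alive
```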
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,926,531,221
|
xpu: improve error handling and reporting in XPU cmake files
|
dvrogozh
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"ciflow/xpu",
"release notes: xpu"
] | 5
|
CONTRIBUTOR
|
For #149075
* Add a graceful cmake error instead of a cryptic one if the SYCL runtime is not found:
```
The link interface of target "c10_xpu" contains:
torch::xpurt
but the target was not found.
```
* Suppress the unclear cmake error if the SYCL compiler is not available and the subsequent version query fails:
```
CMake Error at /home/dvrogozh/pytorch/torch/share/cmake/Caffe2/FindSYCLToolkit.cmake:37 (string):
string sub-command REGEX, mode REPLACE needs at least 6 arguments total to
command.
```
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5
| true
|
2,926,501,829
|
[GPU Snapshot] Add Clear History Flag
|
sraikund16
|
closed
|
[
"module: cuda",
"enhancement",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 12
|
CONTRIBUTOR
|
Summary:
Oftentimes, users complain that a bunch of extra events are prepended to their desired GPU snapshot. This is because they usually attach an OOM logger without knowing it, and when they go to collect the actual snapshot, it includes all the OOM logger contents. Since OOM and regular snapshots use the same backend, we currently don't have the infrastructure in place to split these snapshots.
As a solution, we add a flag to the snapshot frontend that clears the history when auto-trace memory-history recording starts.
A more thorough solution would be to have the user pass in a handle and keep snapshots per handle to separate the events. However, this would likely be complicated and more work than it is worth, as we would have to change the callbacks in the caching allocator and pass these objects between Python and C++.
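A hedged usage sketch: `_record_memory_history` and `_snapshot` are the existing private APIs, while the `clear_history` flag name is an assumption based on this description, not a confirmed signature.
```python
import torch

if torch.cuda.is_available():
    # an OOM logger may already have been recording history in the background
    torch.cuda.memory._record_memory_history(max_entries=100000)

    # start the snapshot we actually care about, dropping previously recorded
    # events (flag name assumed from this PR's description)
    torch.cuda.memory._record_memory_history(max_entries=100000, clear_history=True)

    snapshot = torch.cuda.memory._snapshot()
```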
Test Plan:
See diff below
Differential Revision: D71159720
cc @ptrblck @msaroufim @eqy
| true
|
2,926,472,903
|
nccl: upgrade to 2.26.2 to avoid hang on ncclCommAbort
|
d4l3k
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 7
|
MEMBER
|
Fixes #149153
Yaml generated from:
```
python .github/scripts/generate_ci_workflows.py
```
Test plan:
Repro in https://gist.github.com/d4l3k/16a19b475952bc40ddd7f2febcc297b7
```
rm -rf third_party/nccl
python setup.py develop
```
| true
|
2,926,451,167
|
cpp_wrapper: precompile a few more commonly used headers, and improve RAIIPyObject interface
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td",
"ciflow/rocm-mi300"
] | 17
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147225
* __->__ #149350
Add includes for torch.device, torch.dtype, torch.layout, and torch.memory_format to the cpp_wrapper common header, so that they get precompiled. Additionally, add move constructors and operator bool to RAIIPyObject.
Closes #142005.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,449,933
|
DISABLED test_unshard_async (__main__.TestFullyShardUnshardMultiProcess)
|
pytorch-bot[bot]
|
open
|
[
"oncall: distributed",
"triaged",
"module: flaky-tests",
"skipped",
"module: fsdp",
"oncall: pt2"
] | 2
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_unshard_async&suite=TestFullyShardUnshardMultiProcess&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38913895101).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_unshard_async`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 899, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 0 terminated or timed out after 300.0343186855316 seconds
```
</details>
Test file path: `distributed/_composable/fsdp/test_fully_shard_comm.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @zhaojuanmao @mrshenli @rohan-varma @chauhang @penguinwu
| true
|
2,926,403,912
|
[Partition] Fix flaky
|
BoyuanFeng
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Output buffer names may sometimes change, leading to a flaky error. This fix removes the hardcoded output name in the unit test.
Fixes #148957
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,393,244
|
refresh benchmarks results.
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149347
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,926,339,141
|
Dummy test
|
jamesjwu
|
closed
|
[
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149346
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,286,245
|
Narrow scope of clangtidy lintrunner on CI to match lintrunner configs
|
TovlyFB
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary: `--all-files` for the CLANGTIDY lint is too broad, leading it to produce errors on files that should not be linted, such as `.cuh` files (see [discussion in PR 148936](https://github.com/pytorch/pytorch/pull/148936)). This PR narrows the scope to respect the include and exclude patterns in the `.lintrunner.toml` config.
Test Plan:
1. Apply these changes to D70539649 and [export it to a PR](https://github.com/pytorch/pytorch/pull/148936)
2. Observe that the PR doesn't have any linter errors ([other errors on there are already being looked at](https://fb.workplace.com/groups/pytorch.edge.users/permalink/1721728545364098/) and are separate)
Differential Revision: D71335488
| true
|
2,926,264,997
|
[test] build dist
|
clee2000
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,926,246,464
|
[c10d] Add a collective time estimator for NCCL comms
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149343
We want to upstream the feature from the new NCCL version so that users can estimate collective communication time.
Resolves #147753
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,926,218,457
|
[MPS] Add inductor support for `modified_bessel_i0`.
|
dcci
|
closed
|
[
"Merged",
"topic: improvements",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,209,915
|
[DCP][Draft] Checkpoint daemon process fixes
|
MeetVadakkanchery
|
open
|
[
"oncall: distributed",
"fb-exported",
"release notes: distributed (checkpoint)"
] | 2
|
CONTRIBUTOR
|
Differential Revision: D71336180
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,926,208,734
|
[MTIA] Add _mtia_maybeExchangeDevice to MTIA module
|
PatriceVignola
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary: The FlexAttention path uses `_maybe_exchange_device`, so it will be needed eventually for MTIA as well.
Test Plan: `buck2 test fbcode//mtia/host_runtime/torch_mtia/tests:test_torch_mtia_api -- test_maybe_exchange_device`
Reviewed By: chaos5958
Differential Revision: D70072063
| true
|
2,926,140,021
|
[Inductor] Improve memory locality by iterating over y dimension before x
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15
|
CONTRIBUTOR
|
# Feature
Fixes https://github.com/pytorch/pytorch/issues/148718 by reordering the tensor dims to `(z, y, x)`.
As a bonus refactor, block pointers no longer needed the `reorder=True` argument to `self.active_range_trees()`. Since this argument is no longer used anywhere, this PR simply deletes it as opposed to updating the logic for the new iteration order.
# Perf impact
It looks like there's a decent perf bump on A100, with cudagraphs enabled. Granted, perf runs seem to have some noise between commits. ([Workflow run](https://github.com/pytorch/pytorch/actions/runs/13914815576).)
Training (all neutral or positive):

Inference (one positive, one very small negative):

As reported in https://github.com/pytorch/pytorch/issues/148718, this PR makes consecutive threads access consecutive memory addresses. This should theoretically give the GPU more opportunities to coalesce loads and stores. From Nvidia's [kernel profiling guide](https://docs.nvidia.com/nsight-compute/ProfilingGuide/index.html):
> Local memory is private storage for an executing thread and is not visible outside of that thread. It is intended for thread-local data like thread stacks and register spills. Local memory addresses are translated to global virtual addresses by the AGU unit. Local memory has the same latency as global memory. One difference between global and local memory is that local memory is arranged such that consecutive 32-bit words are accessed by consecutive thread IDs. Accesses are therefore fully coalesced as long as all threads in a warp access the same relative address (e.g., same index in an array variable, same member in a structure variable, etc.).
I couldn't find any information on how coalescing works for other kinds of memory, but the guide mentions it is also supported for accesses to the L2 cache.
> The L2 Request Coalescer (LRC) processes incoming requests for L2 and tries to coalesce read requests before forwarding them to the L2 cache. It also serves programmatic multicast requests from the SM and supports compression for writes.
The [answer to this Stack Overflow post](https://stackoverflow.com/a/5044424) also explains coalescing in a straightforward way. Inductor's current iteration order corresponds to the first (uncoalesced) example in that answer, while the order after this PR corresponds to the second (coalesced) example.
Besides GPUs, this order of accessing data is highly advantageous for systems relying on DMAs, as those are designed to access contiguous spans of memory. This change improves the performance of an elementwise add kernel on an internal model, using internal hardware, by 1.76x. I will share the details with reviewers who are Meta employees via a private channel.
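A tiny, self-contained illustration of the coalescing argument (not Inductor-generated code; the before/after mapping is an interpretation of the description above):
```python
import torch

t = torch.empty(2, 4, 8)            # C-contiguous, strides (32, 8, 1)
sz, sy, sx = t.stride()

# uncoalesced: consecutive linear ids advance along y (stride 8) first
uncoalesced = [z * sz + y * sy + x * sx
               for z in range(2) for x in range(8) for y in range(4)]
# coalesced: consecutive linear ids advance along x (stride 1) first
coalesced = [z * sz + y * sy + x * sx
             for z in range(2) for y in range(4) for x in range(8)]

print(uncoalesced[:4])  # [0, 8, 16, 24] - neighbouring ids touch distant addresses
print(coalesced[:4])    # [0, 1, 2, 3]   - neighbouring ids touch adjacent addresses
```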
# Test plan
- Updated expected code on CI tests.
- Added a new test checking the {x,y,z}indices and block pointers on a 3D pointwise kernel.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,095,481
|
Use enum to select floating point format in FbgemmEmbedding APIs
|
MatzeB
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing"
] | 17
|
CONTRIBUTOR
|
Summary:
Most FBGemmEmbedding APIs currently feature a `bool is_bf16_out` parameter to differentiate between the float16 and bfloat16 format when the output array has type `uint16_t`.
I am in the process of adding E5M2 and E4M3FN formats for output arrays with type `uint8_t`. Instead of adding another parameter, I would like to change the `bool is_bf16_out` parameter to `enum FloatFormat` to make it easier to add new formats:
```
enum class FloatFormat {
DEFAULT,
FLOAT16,
BFLOAT16,
FP8_E5M2,
FP8_E4M3FN,
};
```
Test Plan: sandcastle
Reviewed By: excelle08
Differential Revision: D68046358
| true
|
2,926,080,999
|
Update nightly s390x builds
|
AlekseiNikiforovIBM
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 3
|
COLLABORATOR
|
This change should fix new nightly build failures for s390x.
| true
|
2,926,064,974
|
[ca] fix dce for side-effects
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148516
* #149420
* #149367
* #148694
* #149229
* __->__ #149336
The AOT backward could have contained side-effectful ops, so we can't DCE them. Have CA also call the default fx.Node.is_impure, which will cover some of the existing cases.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,926,055,271
|
torch.matrix_exp gets stuck on GPU
|
jiren-the-gray
|
open
|
[
"needs reproduction",
"module: cuda",
"triaged",
"module: deadlock",
"module: linear algebra"
] | 1
|
NONE
|
### 🐛 Describe the bug
Running `torch.matrix_exp` with [a tensor](https://drive.google.com/file/d/1_BP6SZMKbQqMJ1nikaKrhuGneUsrjAE-/view?usp=sharing) works on CPU but gets stuck on GPU. I am providing a [colab](https://colab.research.google.com/drive/1RLd1q35-xHHANfu7YqLBu69Uv6gONROk?usp=sharing) with a code snippet to reproduce the problem using `concurrent.futures`, but I initially encountered it without this code snippet. This is just to demonstrate with a timeout that it gets stuck, and the code remains stuck even after the thread is attempted to be killed. It looks like the CUDA version encounters some sort of race condition. To run with colab, please upload [this file](https://drive.google.com/file/d/1_BP6SZMKbQqMJ1nikaKrhuGneUsrjAE-/view?usp=sharing) to the files tab first.
Minimal reproducible code:
```python
import torch, sys
from safetensors import safe_open
import concurrent.futures
tensors = {}
with safe_open("matrix_exp.safetensors", framework="pt", device='cpu') as f:
for k in f.keys():
tensors[k] = f.get_tensor(k)
def matrix_exp_with_timeout(tensor, timeout=10):
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(torch.matrix_exp, tensor)
try:
result = future.result(timeout=timeout)
print("Executed successfully")
return result
except concurrent.futures.TimeoutError:
print("Matrix exponential operation took too long and was terminated.")
future.cancel()
sys.exit(1)
timeout = 10 # seconds
print(tensors['tensor'].shape, tensors['tensor'].dtype) # torch.Size([3, 224, 224]) torch.float32
out_cpu = matrix_exp_with_timeout(tensors['tensor'], timeout=timeout) # Executed successfully
out_gpu = matrix_exp_with_timeout(tensors['tensor'].cuda(), timeout=timeout) # Timeout, still stuck
```
To run locally, download the [safetensors file](https://drive.google.com/file/d/1_BP6SZMKbQqMJ1nikaKrhuGneUsrjAE-/view?usp=sharing) and keep alongside the code.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.38
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.5.3.2
[pip3] nvidia-cuda-cupti-cu12==12.5.82
[pip3] nvidia-cuda-nvrtc-cu12==12.5.82
[pip3] nvidia-cuda-runtime-cu12==12.5.82
[pip3] nvidia-cudnn-cu12==9.3.0.75
[pip3] nvidia-cufft-cu12==11.2.3.61
[pip3] nvidia-curand-cu12==10.3.6.82
[pip3] nvidia-cusolver-cu12==11.6.3.83
[pip3] nvidia-cusparse-cu12==12.5.1.3
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] optree==0.14.1
[pip3] pynvjitlink-cu12==0.5.2
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
cc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano
| true
|
2,925,946,182
|
NUMA Binding Integration with torchrun
|
raghavhrishi
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"release notes: distributed (c10d)"
] | 5
|
NONE
|
Implements #148689
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,925,925,146
|
[Profiler/Easy] Pass Overload Names To Kineto
|
sraikund16
|
closed
|
[
"enhancement",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler"
] | 9
|
CONTRIBUTOR
|
Summary: Right now we get overload names and forward them to the Event List frontend of the profiler, but we do not forward anything to Kineto. This diff checks whether each CPU op has an overload name and appends it to the op name if necessary.
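An illustration of where the change would show up (the exact output format is an assumption): profiling a CPU op and inspecting the reported names, which after this diff should carry the overload, e.g. `aten::add.Tensor` rather than just `aten::add`.
```python
import torch
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU]) as prof:
    torch.add(torch.ones(4), torch.ones(4))

# op names in the exported/kineto view would include the overload suffix
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```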
Test Plan: Added test in CI
Differential Revision: D71326670
| true
|