| id int64 (2.74B–3.05B) | title string (length 1–255) | user string (length 2–26) | state string (2 classes) | labels list (length 0–24) | comments int64 (0–206) | author_association string (4 classes) | body string (length 7–62.5k, nullable ⌀) | is_title bool (1 class) |
|---|---|---|---|---|---|---|---|---|
2,764,105,070
|
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
|
ruidazeng
|
closed
|
[
"module: binaries",
"triaged",
"module: python version"
] | 2
|
NONE
|
### 🐛 Describe the bug
```console
ruidazeng@Ruidas-Laptop demo % pip3 install torch
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
ruidazeng@Ruidas-Laptop demo % pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
Looking in indexes: https://download.pytorch.org/whl/cpu
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
```
Issue #75534 was not helpful; none of the fixes worked.
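A hedged diagnostic sketch (not an official fix): `from versions: none` usually means no wheel is published for the interpreter's Python version and platform combination, so it can help to confirm exactly which interpreter `pip3` is installing for:
```python
# Hedged diagnostic: print the interpreter and platform that pip3 installs for.
# If no torch wheel exists yet for that Python version (e.g. a very new
# release), pip reports "from versions: none".
import sys, platform
print(sys.version)         # e.g. 3.13.0
print(platform.machine())  # e.g. arm64
```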
### Versions
```console
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.30.5
Libc version: N/A
Python version: 3.13.0 (main, Oct 7 2024, 05:02:14) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit-Mach-O
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==2.1.3
[conda] Could not collect
```
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,764,097,499
|
[BE] Add stride check in `torch.max_pool1d()`
|
shink
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"Stale"
] | 6
|
CONTRIBUTOR
|
Fixes #142454
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,764,095,264
|
Fix grad_scaler for MPS, which doesn't support FP64
|
masc-it
|
closed
|
[
"triaged",
"open source",
"module: amp (automated mixed precision)",
"Stale"
] | 4
|
NONE
|
- Remove the FP64 intermediate cast when on an MPS device.
Original error:
```
scaler.unscale_(optimizer)
File "..../lib/python3.10/site-packages/torch/amp/grad_scaler.py", line 335, in unscale_
inv_scale = self._scale.double().reciprocal().float()
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
```
I don't know if this is the correct way to do it; I'm just trying to escalate this annoying error for MPS folks.
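A minimal sketch of the kind of change described above (an assumption about the intended fix, not the merged diff; the helper name is hypothetical), based on the line shown in the traceback:
```python
import torch

def _compute_inv_scale(scale: torch.Tensor) -> torch.Tensor:
    # Hedged sketch: skip the float64 intermediate on MPS, which does not
    # support float64; keep the double-precision path on other devices.
    if scale.device.type == "mps":
        return scale.reciprocal().float()
    return scale.double().reciprocal().float()
```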
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5
| true
|
2,764,071,912
|
Torch compile scaled_dot_product_attention NAN
|
keilsmart
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: inductor"
] | 3
|
NONE
|
### 🐛 Describe the bug
```
import torch

def test_sdpa():
    class Model(torch.nn.Module):
        def __init__(self):
            super().__init__()

        def forward(self, q, k, v):
            x = torch.nn.functional.scaled_dot_product_attention(q, k, v)
            return x

    model = Model().cuda().eval()
    q = torch.randn(2, 12, 2980, 128, device=torch.device('cuda'), dtype=torch.bfloat16)
    k = torch.randn(2, 12, 2980, 128, device=torch.device('cuda'), dtype=torch.bfloat16)
    v = torch.randn(2, 12, 2980, 128, device=torch.device('cuda'), dtype=torch.bfloat16)
    model = torch.compile(model, dynamic=True)
    with torch.no_grad():
        output = model(q, k, v)
    print(output)

test_sdpa()
```
When I compile the model with dynamic=True, it produces NaN results. However, if I set dynamic=False, the result is OK. When I then run this code with TORCH_LOGS="+dynamo,output_code", I find that the generated output code sets the scale to nan: buf0 = torch.ops.aten._scaled_dot_product_flash_attention.default(arg6_1, arg5_1, arg4_1, scale=nan).
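For reference, when `scale=None`, `scaled_dot_product_attention` uses a default softmax scale of 1/sqrt(E), where E is the last dimension of the query; for the head dimension of 128 in the repro that should be roughly 0.088, so `scale=nan` in the generated call points at the symbolic computation of this default going wrong under dynamic shapes. A quick check of the expected value:
```python
# Expected default SDPA scale for the repro's head_dim of 128.
import math
head_dim = 128
expected_scale = 1.0 / math.sqrt(head_dim)
print(expected_scale)  # ~0.0884, not nan
```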
### Error logs
```
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] def call(args):
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1 = args
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] args.clear()
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] s0 = arg0_1
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] s1 = arg1_1
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] s2 = arg2_1
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] s3 = arg3_1
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] assert_size_stride(arg4_1, (s0, s1, s2, s3), (s1*s2*s3, s2*s3, s3, 1))
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] assert_size_stride(arg5_1, (s0, s1, s2, s3), (s1*s2*s3, s2*s3, s3, 1))
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] assert_size_stride(arg6_1, (s0, s1, s2, s3), (s1*s2*s3, s2*s3, s3, 1))
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] with torch.cuda._DeviceGuard(0):
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] torch.cuda.set_device(0)
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] # Topologically Sorted Source Nodes: [x], Original ATen: [aten._scaled_dot_product_flash_attention]
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] buf0 = torch.ops.aten._scaled_dot_product_flash_attention.default(arg6_1, arg5_1, arg4_1, scale=nan)
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] del arg4_1
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] del arg5_1
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] del arg6_1
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] buf1 = buf0[0]
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] del buf0
V1231 15:24:57.025000 2461995 torch/_inductor/codecache.py:1130] [0/0] [__output_code] return (buf1, )
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.134-16.3.al8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
Nvidia driver version: 555.42.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) Platinum 8475B
BIOS Model name: Intel(R) Xeon(R) Platinum 8475B
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 5400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.0.dev20241023+cu124
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,764,055,445
|
[Inductor] Support parallel reduction for GroupNorm
|
jiayisunx
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144020
Summary:
Support parallel reduction for GroupNorm by optimizing the parallelization heuristics: When the range of the first inner loop is much larger than the range of all outer loops, change the starting depth of parallelization to the first inner loop.
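A hedged pseudocode sketch of that heuristic (the names and threshold are illustrative, not the actual Inductor internals):
```python
# Illustrative heuristic: start parallelizing at the first inner loop when its
# range dwarfs the combined range of the outer loops (threshold is assumed).
def pick_parallel_start_depth(outer_ranges, first_inner_range, threshold=32):
    outer_total = 1
    for r in outer_ranges:
        outer_total *= r
    if first_inner_range > threshold * outer_total:
        return len(outer_ranges)  # parallelize starting at the first inner loop
    return 0                      # default: parallelize the outer loops
```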
I tested the Inductor benchmark with this PR on CPU. One torchbench model (pytorch_CycleGAN_and_pix2pix) achieved ~45% performance improvement, and two diffusion models (Stable Diffusion and the Latent Consistency Model (LCM)) achieved ~2% performance improvement.
Example:
```
import torch
import torch.nn as nn
class GN(nn.Module):
    def __init__(self, num_groups, num_channels):
        super(GN, self).__init__()
        self.gn = nn.GroupNorm(num_groups, num_channels)

    def forward(self, x):
        return self.gn(x)

x = torch.randn(2, 64, 168, 168).to(memory_format=torch.channels_last)
m = GN(2, 64).eval()
compiled_m = torch.compile(m)
with torch.no_grad():
    out = compiled_m(x)
```
Generated code:
- Before:
```
cpp_fused_native_group_norm_0 = async_compile.cpp_pybinding(['const float*', 'const float*', 'const float*', 'float*', 'float*', 'float*', 'float*', 'float*'], '''
#include "/tmp/torchinductor_jiayisun/pi/cpicxudqmdsjh5cm4klbtbrvy2cxwr7whxl3md2zzdjdf3orvfdf.h"
extern "C" void kernel(const float* in_ptr0,
const float* in_ptr1,
const float* in_ptr2,
float* out_ptr0,
float* out_ptr1,
float* out_ptr2,
float* out_ptr3,
float* out_ptr4)
{
#pragma omp parallel num_threads(56)
{
int tid = omp_get_thread_num();
{
#pragma omp for collapse(2)
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(2L); x0+=static_cast<int64_t>(1L))
{
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(2L); x1+=static_cast<int64_t>(1L))
{
{
Welford<float> tmp_acc0 = Welford<float>();
Welford<at::vec::Vectorized<float>> tmp_acc0_vec = Welford<at::vec::Vectorized<float>>();
Welford<at::vec::Vectorized<float>> masked_tmp_acc0_vec = Welford<at::vec::Vectorized<float>>();
static WeightRecp<at::vec::Vectorized<float>> wrecps0(static_cast<int64_t>(56448L));
for(int64_t x2=static_cast<int64_t>(0L); x2<static_cast<int64_t>(28224L); x2+=static_cast<int64_t>(1L))
{
for(int64_t x3=static_cast<int64_t>(0L); x3<static_cast<int64_t>(32L); x3+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x3 >= static_cast<int64_t>(0) && x3 < static_cast<int64_t>(32L)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x3 + 32L*x1 + 64L*x2 + 1806336L*x0), static_cast<int64_t>(16));
tmp_acc0_vec = welford_combine(tmp_acc0_vec, tmp0, &wrecps0);
}
}
}
}
tmp_acc0 = welford_combine(tmp_acc0, welford_vec_reduce_all(masked_tmp_acc0_vec));
tmp_acc0 = welford_combine(tmp_acc0, welford_vec_reduce_all(tmp_acc0_vec));
out_ptr0[static_cast<int64_t>(x1 + 2L*x0)] = static_cast<float>(tmp_acc0.mean);
out_ptr1[static_cast<int64_t>(x1 + 2L*x0)] = static_cast<float>(tmp_acc0.m2);
}
}
}
}
#pragma omp single
{
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(2L); x0+=static_cast<int64_t>(1L))
{
#pragma GCC ivdep
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(2L); x1+=static_cast<int64_t>(1L))
{
for(int64_t x2=static_cast<int64_t>(0L); x2<static_cast<int64_t>(32L); x2+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x2 >= static_cast<int64_t>(0) && x2 < static_cast<int64_t>(32L)))
{
auto tmp0 = out_ptr1[static_cast<int64_t>(x1 + 2L*x0)];
auto tmp6 = at::vec::Vectorized<float>::loadu(in_ptr1 + static_cast<int64_t>(x2 + 32L*x1), static_cast<int64_t>(16));
auto tmp9 = out_ptr0[static_cast<int64_t>(x1 + 2L*x0)];
auto tmp13 = at::vec::Vectorized<float>::loadu(in_ptr2 + static_cast<int64_t>(x2 + 32L*x1), static_cast<int64_t>(16));
auto tmp1 = static_cast<float>(903168.0);
auto tmp2 = tmp0 / tmp1;
auto tmp3 = static_cast<float>(1e-05);
auto tmp4 = decltype(tmp2)(tmp2 + tmp3);
auto tmp5 = 1 / std::sqrt(tmp4);
auto tmp7 = at::vec::Vectorized<float>(tmp5);
auto tmp8 = tmp7 * tmp6;
auto tmp10 = decltype(tmp9)(-tmp9);
auto tmp11 = at::vec::Vectorized<float>(tmp10);
auto tmp12 = tmp11 * tmp8;
auto tmp14 = tmp12 + tmp13;
tmp8.store(out_ptr2 + static_cast<int64_t>(x2 + 32L*x1 + 64L*x0));
tmp14.store(out_ptr3 + static_cast<int64_t>(x2 + 32L*x1 + 64L*x0));
}
}
}
}
}
}
}
{
#pragma omp for collapse(2)
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(2L); x0+=static_cast<int64_t>(1L))
{
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(28224L); x1+=static_cast<int64_t>(1L))
{
for(int64_t x2=static_cast<int64_t>(0L); x2<static_cast<int64_t>(64L); x2+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x2 >= static_cast<int64_t>(0) && x2 < static_cast<int64_t>(64L)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x2 + 64L*x1 + 1806336L*x0), static_cast<int64_t>(16));
auto tmp1 = at::vec::Vectorized<float>::loadu(out_ptr2 + static_cast<int64_t>(x2 + 64L*x0), static_cast<int64_t>(16));
auto tmp3 = at::vec::Vectorized<float>::loadu(out_ptr3 + static_cast<int64_t>(x2 + 64L*x0), static_cast<int64_t>(16));
auto tmp2 = tmp0 * tmp1;
auto tmp4 = tmp2 + tmp3;
tmp4.store(out_ptr4 + static_cast<int64_t>(x2 + 64L*x1 + 1806336L*x0));
}
}
}
}
}
}
}
}
''')
```
- After:
```
cpp_fused_native_group_norm_0 = async_compile.cpp_pybinding(['const float*', 'const float*', 'const float*', 'float*', 'float*', 'float*', 'float*', 'float*'], '''
#include "/tmp/torchinductor_jiayisun/pi/cpicxudqmdsjh5cm4klbtbrvy2cxwr7whxl3md2zzdjdf3orvfdf.h"
extern "C" void kernel(const float* in_ptr0,
const float* in_ptr1,
const float* in_ptr2,
float* out_ptr0,
float* out_ptr1,
float* out_ptr2,
float* out_ptr3,
float* out_ptr4)
{
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(2L); x0+=static_cast<int64_t>(1L))
{
#pragma GCC ivdep
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(2L); x1+=static_cast<int64_t>(1L))
{
{
Welford<float> tmp_acc0 = Welford<float>();
Welford<at::vec::Vectorized<float>> tmp_acc0_vec = Welford<at::vec::Vectorized<float>>();
Welford<at::vec::Vectorized<float>> masked_tmp_acc0_vec = Welford<at::vec::Vectorized<float>>();
Welford<at::vec::Vectorized<float>> tmp_acc0_vec_arr[56];
for (int i = 0; i < 56; i++)
{
tmp_acc0_vec_arr[i] = Welford<at::vec::Vectorized<float>>();
}
Welford<float> tmp_acc0_arr[56];
for (int i = 0; i < 56; i++)
{
tmp_acc0_arr[i] = Welford<float>();
}
Welford<at::vec::Vectorized<float>> masked_tmp_acc0_vec_arr[56];
for (int i = 0; i < 56; i++)
{
masked_tmp_acc0_vec_arr[i] = Welford<at::vec::Vectorized<float>>();
}
#pragma omp parallel num_threads(56)
{
int tid = omp_get_thread_num();
static WeightRecp<at::vec::Vectorized<float>> wrecps0(static_cast<int64_t>(1008L));
Welford<at::vec::Vectorized<float>> tmp_acc0_vec_local = Welford<at::vec::Vectorized<float>>();
Welford<float> tmp_acc0_local = Welford<float>();
Welford<at::vec::Vectorized<float>> masked_tmp_acc0_vec_local = Welford<at::vec::Vectorized<float>>();
#pragma omp for
for(int64_t x2=static_cast<int64_t>(0L); x2<static_cast<int64_t>(28224L); x2+=static_cast<int64_t>(1L))
{
for(int64_t x3=static_cast<int64_t>(0L); x3<static_cast<int64_t>(32L); x3+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x3 >= static_cast<int64_t>(0) && x3 < static_cast<int64_t>(32L)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x3 + 32L*x1 + 64L*x2 + 1806336L*x0), static_cast<int64_t>(16));
tmp_acc0_vec_local = welford_combine(tmp_acc0_vec_local, tmp0, &wrecps0);
}
}
}
}
tmp_acc0_vec_arr[tid] = tmp_acc0_vec_local;
tmp_acc0_arr[tid] = tmp_acc0_local;
masked_tmp_acc0_vec_arr[tid] = masked_tmp_acc0_vec_local;
}
for (int tid = 0; tid < 56; tid++)
{
tmp_acc0_vec = welford_combine(tmp_acc0_vec, tmp_acc0_vec_arr[tid]);
}
for (int tid = 0; tid < 56; tid++)
{
tmp_acc0 = welford_combine(tmp_acc0, tmp_acc0_arr[tid]);
}
for (int tid = 0; tid < 56; tid++)
{
masked_tmp_acc0_vec = welford_combine(masked_tmp_acc0_vec, masked_tmp_acc0_vec_arr[tid]);
}
tmp_acc0 = welford_combine(tmp_acc0, welford_vec_reduce_all(masked_tmp_acc0_vec));
tmp_acc0 = welford_combine(tmp_acc0, welford_vec_reduce_all(tmp_acc0_vec));
out_ptr0[static_cast<int64_t>(x1 + 2L*x0)] = static_cast<float>(tmp_acc0.mean);
out_ptr1[static_cast<int64_t>(x1 + 2L*x0)] = static_cast<float>(tmp_acc0.m2);
}
}
}
}
{
#pragma GCC ivdep
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(2L); x0+=static_cast<int64_t>(1L))
{
#pragma GCC ivdep
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(2L); x1+=static_cast<int64_t>(1L))
{
for(int64_t x2=static_cast<int64_t>(0L); x2<static_cast<int64_t>(32L); x2+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x2 >= static_cast<int64_t>(0) && x2 < static_cast<int64_t>(32L)))
{
auto tmp0 = out_ptr1[static_cast<int64_t>(x1 + 2L*x0)];
auto tmp6 = at::vec::Vectorized<float>::loadu(in_ptr1 + static_cast<int64_t>(x2 + 32L*x1), static_cast<int64_t>(16));
auto tmp9 = out_ptr0[static_cast<int64_t>(x1 + 2L*x0)];
auto tmp13 = at::vec::Vectorized<float>::loadu(in_ptr2 + static_cast<int64_t>(x2 + 32L*x1), static_cast<int64_t>(16));
auto tmp1 = static_cast<float>(903168.0);
auto tmp2 = tmp0 / tmp1;
auto tmp3 = static_cast<float>(1e-05);
auto tmp4 = decltype(tmp2)(tmp2 + tmp3);
auto tmp5 = 1 / std::sqrt(tmp4);
auto tmp7 = at::vec::Vectorized<float>(tmp5);
auto tmp8 = tmp7 * tmp6;
auto tmp10 = decltype(tmp9)(-tmp9);
auto tmp11 = at::vec::Vectorized<float>(tmp10);
auto tmp12 = tmp11 * tmp8;
auto tmp14 = tmp12 + tmp13;
tmp8.store(out_ptr2 + static_cast<int64_t>(x2 + 32L*x1 + 64L*x0));
tmp14.store(out_ptr3 + static_cast<int64_t>(x2 + 32L*x1 + 64L*x0));
}
}
}
}
}
}
#pragma omp parallel num_threads(56)
{
int tid = omp_get_thread_num();
{
#pragma omp for collapse(2)
for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(2L); x0+=static_cast<int64_t>(1L))
{
for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(28224L); x1+=static_cast<int64_t>(1L))
{
for(int64_t x2=static_cast<int64_t>(0L); x2<static_cast<int64_t>(64L); x2+=static_cast<int64_t>(16L))
{
{
if(C10_LIKELY(x2 >= static_cast<int64_t>(0) && x2 < static_cast<int64_t>(64L)))
{
auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x2 + 64L*x1 + 1806336L*x0), static_cast<int64_t>(16));
auto tmp1 = at::vec::Vectorized<float>::loadu(out_ptr2 + static_cast<int64_t>(x2 + 64L*x0), static_cast<int64_t>(16));
auto tmp3 = at::vec::Vectorized<float>::loadu(out_ptr3 + static_cast<int64_t>(x2 + 64L*x0), static_cast<int64_t>(16));
auto tmp2 = tmp0 * tmp1;
auto tmp4 = tmp2 + tmp3;
tmp4.store(out_ptr4 + static_cast<int64_t>(x2 + 64L*x1 + 1806336L*x0));
}
}
}
}
}
}
}
}
''')
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,764,028,942
|
Remove unused setDataLoaderSignalHandlers
|
cyyever
|
closed
|
[
"module: dataloader",
"triaged",
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
setDataLoaderSignalHandlers isn't used in repositories under the PyTorch organization.
cc @andrewkho @divyanshk @SsnL @VitalyFedyunin @dzhulgakov
| true
|
2,763,997,898
|
RuntimeError: invalid dtype for bias - should match query's dtype
|
hayatkhan8660-maker
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
I am training the X-CLIP model using a multi-GPU setup (3 GPUs). However, when I start the training process, I encounter the following error:
" **RuntimeError: invalid dtype for bias - should match query's dtype** "
Here is the complete traceback of the error:
UserWarning: torch.utils.checkpoint.checkpoint_sequential: the use_reentrant parameter should be passed explicitly. In version 2.5 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
warnings.warn(
[rank0]: Traceback (most recent call last):
[rank0]: File "/data/Hayat_Research_Data/VideoX/X-CLIP/main.py", line 283, in <module>
[rank0]: main(config)
[rank0]: File "/data/Hayat_Research_Data/VideoX/X-CLIP/main.py", line 104, in main
[rank0]: train_one_epoch(epoch, model, criterion, optimizer, lr_scheduler, train_loader, text_labels, config, mixup_fn)
[rank0]: File "/data/Hayat_Research_Data/VideoX/X-CLIP/main.py", line 149, in train_one_epoch
[rank0]: output = model(images, texts)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1643, in forward
[rank0]: else self._run_ddp_forward(*inputs, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1459, in _run_ddp_forward
[rank0]: return self.module(*inputs, **kwargs) # type: ignore[index]
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/data/Hayat_Research_Data/VideoX/X-CLIP/models/xclip.py", line 135, in forward
[rank0]: text_features = self.cache_text(text)
[rank0]: File "/data/Hayat_Research_Data/VideoX/X-CLIP/models/xclip.py", line 125, in cache_text
[rank0]: self.cache_text_features = self.encode_text(text)
[rank0]: File "/data/Hayat_Research_Data/VideoX/X-CLIP/models/xclip.py", line 97, in encode_text
[rank0]: x = self.transformer(x)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/data/Hayat_Research_Data/VideoX/X-CLIP/clip/model.py", line 87, in forward
[rank0]: return self.resblocks(x)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/container.py", line 250, in forward
[rank0]: input = module(input)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/data/Hayat_Research_Data/VideoX/X-CLIP/clip/model.py", line 75, in forward
[rank0]: x = x + self.attention(self.ln_1(x))
[rank0]: File "/data/Hayat_Research_Data/VideoX/X-CLIP/clip/model.py", line 72, in attention
[rank0]: return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/modules/activation.py", line 1368, in forward
[rank0]: attn_output, attn_output_weights = F.multi_head_attention_forward(
[rank0]: File "/home/hayatullah/anaconda3/envs/VFL/lib/python3.10/site-packages/torch/nn/functional.py", line 6278, in multi_head_attention_forward
[rank0]: attn_output = scaled_dot_product_attention(
[rank0]: RuntimeError: invalid dtype for bias - should match query's dtype
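A hedged workaround sketch (an assumption based on the error message, not a confirmed fix): cast the additive attention mask to the query's dtype and device before the attention call; the `attention` method below mirrors the wrapper seen in the traceback (`clip/model.py`) and is illustrative only.
```python
# Hypothetical patch sketch for the attention wrapper in the traceback:
# the fused SDPA path requires the attn_mask (bias) dtype to match the query.
def attention(self, x):
    attn_mask = (
        self.attn_mask.to(dtype=x.dtype, device=x.device)
        if self.attn_mask is not None
        else None
    )
    return self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0]
```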
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX 6000 Ada Generation
GPU 1: NVIDIA RTX 6000 Ada Generation
GPU 2: NVIDIA RTX 6000 Ada Generation
Nvidia driver version: 550.107.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5975WX 32-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 7006.6401
CPU min MHz: 1800.0000
BogoMIPS: 7186.58
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.7.0
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.5.1
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.20.1
[pip3] triton==3.0.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.11 py310h5eee18b_0
[conda] mkl_random 1.2.8 py310h1128e8f_0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] numpydoc 1.7.0 py310h06a4308_0
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch 2.5.1 py3.10_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchtriton 3.1.0 py310 pytorch
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
| true
|
2,763,941,806
|
Use _cvtss_sh and _cvtsh_ss for scalar conversion of Half on AVX512
|
CaoE
|
closed
|
[
"module: cpu",
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Using `_cvtss_sh` and `_cvtsh_ss` on AVX512 gives better performance for scalar conversion of Half.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,763,919,490
|
Fix to torch.hub documentation grammar mistakes.
|
AriyanPandian
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 5
|
NONE
|
Added proper punctuation and filler words to the torch.hub documentation to fix the grammar mistakes.

| true
|
2,763,915,936
|
[inductor] [cuda] [fake tensor] `ConvTranspose` behave differently when Input type and weight type are not the same
|
shaoyuyoung
|
open
|
[
"triaged",
"module: type promotion",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**:
trigger condition 1: only the `input tensor` is moved to CUDA; `model.cuda()` is not called.
trigger condition 2: the `padding` param is necessary; otherwise, Inductor will also raise the error.
**device**: `cuda` only
**exposed area**: `ConvTranspose1d`, `ConvTranspose2d`, `ConvTranspose3d`
```python
import torch

class Model(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.conv_t = eval(f"torch.nn.ConvTranspose{dim}d(1, 1, kernel_size=(2,) * {dim}, padding=(1,) * {dim})")  # trigger condition

    def forward(self, x):
        x = self.conv_t(x)
        return x

def run_test(dim, mode):
    x = torch.randn(*([1] * (dim + 2))).cuda()  # trigger condition
    inputs = [x]
    model = Model(dim)
    if mode == "inductor":
        model = torch.compile(model)
    try:
        output = model(*inputs)
        print(f"success on {mode}: {output}")
    except Exception as e:
        print(e)

run_test(1, "eager")
run_test(1, "inductor")
run_test(2, "eager")
run_test(2, "inductor")
run_test(3, "eager")
run_test(3, "inductor")
```
### Error logs
```
Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
success on inductor: tensor([], device='cuda:0', size=(1, 1, 0), grad_fn=<CompiledFunctionBackward>)
Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
success on inductor: tensor([], device='cuda:0', size=(1, 1, 0, 0),
grad_fn=<CompiledFunctionBackward>)
Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
success on inductor: tensor([], device='cuda:0', size=(1, 1, 0, 0, 0),
grad_fn=<CompiledFunctionBackward>)
```
### Versions
PyTorch version: 20241230
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @nairbv @mruberry @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,763,912,055
|
[18/N] Fix extra warnings brought by clang-tidy-17
|
cyyever
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ciflow/s390"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,763,909,869
|
[inductor] [cpu] [graph optimization] output size calculation behaves differently of `ConvTranspose1d`, `ConvTranspose2d`, `ConvTranspose3d` along with `sigmoid`
|
shaoyuyoung
|
closed
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**:
trigger condition 1: for `ConvTranspose`, the output size calculation formula is $O = (I - 1) \times S + K - 2 \times P$. When $O$ is zero, eager raises an error but Inductor passes the check and returns an empty tensor.
trigger condition 2: `ConvTranspose` must be followed by `sigmoid`; otherwise, Inductor will also raise the error.
I guess the `ConvTranspose-sigmoid` optimization in the CPP backend loses the check on the output size.
**device**: `cpu` only
**exposed area**: `ConvTranspose1d`, `ConvTranspose2d`, `ConvTranspose3d`
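Plugging the repro below into the formula above (with $I = 1$, $S = 1$, $K = 2$, $P = 1$) gives $O = (1 - 1) \times 1 + 2 - 2 \times 1 = 0$, which is why eager rejects the configuration while the compiled path returns an empty tensor.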
```python
import torch

class Model(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.conv_t = eval(f"torch.nn.ConvTranspose{dim}d(1, 1, kernel_size=(2,) * {dim}, padding=(1,) * {dim})")

    def forward(self, x):
        x = self.conv_t(x)
        x = torch.sigmoid(x)  # trigger condition
        return x

def run_test(dim, mode):
    x = torch.randn(*([1] * (dim + 2)))
    inputs = [x]
    model = Model(dim)
    if mode == "inductor":
        model = torch.compile(model)
    try:
        output = model(*inputs)
        print(f"success on {mode}: {output}")
    except Exception as e:
        print(e)

run_test(1, "eager")
run_test(1, "inductor")
run_test(2, "eager")
run_test(2, "inductor")
run_test(3, "eager")
run_test(3, "inductor")
```
### Error logs
```
Given input size per channel: (1 x 1). Calculated output size per channel: (1 x 0). Output size is too small
success on inductor: tensor([], size=(1, 1, 0), grad_fn=<CompiledFunctionBackward>)
Given input size per channel: (1 x 1). Calculated output size per channel: (0 x 0). Output size is too small
success on inductor: tensor([], size=(1, 1, 0, 0), grad_fn=<CompiledFunctionBackward>)
Given input size per channel: (1 x 1 x 1). Calculated output size per channel: (0 x 0 x 0). Output size is too small
success on inductor: tensor([], size=(1, 1, 0, 0, 0), grad_fn=<CompiledFunctionBackward>)
```
### Versions
PyTorch version: 20241230
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu
| true
|
2,763,883,449
|
[RFC] Add CPP Grouped GEMM Template for Inductor CPU
|
leslie-fang-intel
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
## Motivation
Grouped GEMM is a common pattern in modeling. For example, in the `LlamaMLP` module (https://github.com/huggingface/transformers/blob/d5aebc64653d09660818109f2fac55b5e1031023/src/transformers/models/llama/modeling_llama.py#L187-L188), the `gate_proj` and `up_proj` layers have the same dimensions and share the same activation. After `gate_proj`, an activation function is applied, and the resulting activation is multiplied by the output of `up_proj` to compute the final output. Fusing the `gate_proj` and `up_proj` layers into a Grouped GEMM improves memory locality when applying the activation and multiplication operations. In this RFC, we propose approaches to implement this Grouped GEMM optimization.
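For reference, a simplified sketch of the unfused pattern described above (module and layer names follow the linked `LlamaMLP` code, but this is an illustration, not the exact Hugging Face implementation):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    # gate_proj and up_proj share the same input and the same dimensions;
    # fusing them into one grouped GEMM keeps the silu/mul epilogue in cache.
    def __init__(self, hidden_size, intermediate_size):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x):
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))
```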
## Approaches
We propose to implement the Grouped GEMM optimization with a CPP template, as it is more flexible in supporting different numbers of GEMMs and different epilogue fusions. Here is the proposed design of some key components.
### Pattern Matcher
We introduce a `grouped_gemm_pass` to find the pattern of an anchor node (the activation shared by the Grouped GEMM) and a group of GEMMs, replace this pattern with the `grouped_gemm_lowering` function, and further lower it into the GEMM template.
We also evaluated `MultiOutputPattern` to enable the pattern matcher and fusion in post-grad fusion passes. The current limitation is that `MultiOutputPattern` requires a fixed number of output nodes when defining the pattern.
### Inductor Lowering
After lowering into the Grouped GEMM template, most of the flow is the same as for the standard template. The only extension is that the Grouped GEMM template may have multiple output nodes. We define the template node with `MultiOutputLayout` and multiple output buffers with `MultiOutput` (each corresponding to a GEMM output).
### Inductor Scheduler Nodes Fusions
In the scheduler node fusion phase:
* First, we fuse the template node (with a layout of `MultiOutputLayout`) and each GEMM output (`MultiOutput`) into a `FusedSchedulerNode`.
* Then, we further fuse this `FusedSchedulerNode` with its epilogues, e.g. `silu`, `mul`, `relu`.
After this phase, we have the `FusedSchedulerNode` with the Grouped GEMM and its epilogues. Next, we do the code generation within the CPP backend into the CPP Grouped GEMM template.
### CPP Grouped GEMM Template
We define a CPP Grouped GEMM template which extends the current CPP GEMM template implementation with:
* A flexible number of GEMMs
* Each GEMM can have independent or shared activations
* Each GEMM can have a unique weight, but the weights have the same sizes
* Each GEMM can have a unique bias or none
* Each GEMM has its own epilogues
Specifically, we introduce a `CppGroupedGemmTemplate` class that inherits from `CppGemmTemplate`. Key methods, such as `add_choices` and `render`, are overridden to support the aforementioned features.
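A minimal sketch of the proposed class structure (the base class is stubbed here so the snippet stands alone, and the method signatures are assumptions rather than the actual Inductor API):
```python
class CppGemmTemplate:
    """Stand-in for Inductor's existing CPP GEMM template."""

class CppGroupedGemmTemplate(CppGemmTemplate):
    """Hedged sketch: one template instance covering a group of GEMMs."""

    def add_choices(self, choices, layout, input_nodes, num_gemms):
        """Register a single template choice that covers all GEMMs in the group."""

    def render(self, kernel, epilogue_nodes=None):
        """Emit one C++ kernel computing every GEMM plus its per-GEMM epilogues."""
```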
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,763,838,638
|
[AsyncMM] re-enable and adapt to cutlass 3.6.0
|
yifuwang
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: not user facing",
"ci-no-td"
] | 15
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144011
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
[D68734067](https://our.internmc.facebook.com/intern/diff/D68734067)
| true
|
2,763,829,050
|
Native channel shuffle floating point exception
|
abcarlisle
|
closed
|
[
"module: nn",
"triaged",
"open source",
"Merged",
"Stale",
"ciflow/trunk",
"release notes: nn"
] | 11
|
CONTRIBUTOR
|
Fixes #142453
Added TORCH_CHECKs to prevent the user from using the `native_channel_shuffle` function incorrectly and getting a "Floating point exception (core dumped)".
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,763,822,967
|
[CUDA] Check `size` calculation in `ilpReduce` for `softmax`
|
eqy
|
closed
|
[
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"ciflow/periodic"
] | 17
|
COLLABORATOR
|
For #143644
cc @ptrblck @msaroufim
| true
|
2,763,819,957
|
Brister/always tiled reduction
|
blaine-rister
|
closed
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
Test the CI with tiled reductions always on. This might catch some bugs.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,763,815,395
|
[ROCm] fix torch.layer_norm invalid configuration problem when input is large tensor
|
hongxiayang
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/rocm"
] | 5
|
COLLABORATOR
|
Fixes #136291
This PR fixes the `invalid configuration argument` problem that happens on ROCm when the input to `torch.layer_norm` is a large tensor.
```
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 2573, in layer_norm
return torch.layer_norm
RuntimeError: HIP error: invalid configuration argument
```
After investigation, I found the reason this error happens: the AMD compute language runtime checks whether `gridDim.x * blockDim.x` is greater than `std::numeric_limits<uint32_t>::max()`. If it is, it errors out with the "invalid configuration argument" message.
The fix is to split the whole task into several chunks so that no chunk triggers the failure condition. This ensures correctness and completeness given the current kernel implementation logic of `vectorized_layer_norm_kernel`.
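A hedged host-side sketch of the chunking idea (the block size, names, and launch call are illustrative placeholders, not the actual kernel code):
```python
# Illustrative only: keep each launch's grid_x * block_x below the uint32
# limit that the runtime checks, by launching the kernel in row chunks.
UINT32_MAX = 2**32 - 1

def launch_in_chunks(total_rows, block_x=256):
    max_rows_per_launch = UINT32_MAX // block_x
    start = 0
    while start < total_rows:
        rows = min(max_rows_per_launch, total_rows - start)
        # hypothetical_launch(grid_x=rows, block_x=block_x, row_offset=start)
        start += rows
```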
Also added a large-tensor layer_norm unit test, `test_layer_norm_large_tensor`, with the same shape `[16, 3000, 3000, 16]` as the one used in PyTorch issue #136291, so that the unit test can check the expected output value to ensure correctness.
Future work may include performance optimization of layer_norm and CK layer_norm integration.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @naromero77amd
| true
|
2,763,769,352
|
[inductor] Add missing py312 xfail
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144006
See #144006
```py
__________________________________________ CudaReproTests.test_repeated_masked_load __________________________________________
RuntimeError: First class dim doesn't work with python 3.12
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jansel/conda/envs/pytorch/lib/python3.12/unittest/case.py", line 58, in testPartExecutor
yield
File "/home/jansel/conda/envs/pytorch/lib/python3.12/unittest/case.py", line 634, in run
self._callTestMethod(testMethod)
File "/home/jansel/conda/envs/pytorch/lib/python3.12/unittest/case.py", line 589, in _callTestMethod
if method() is not None:
^^^^^^^^
File "/home/jansel/pytorch/torch/testing/_internal/common_utils.py", line 3108, in wrapper
method(*args, **kwargs)
File "/home/jansel/pytorch/test/inductor/test_cuda_repro.py", line 1678, in test_repeated_masked_load
from functorch.einops import rearrange
File "/home/jansel/pytorch/functorch/einops/__init__.py", line 1, in <module>
from .rearrange import rearrange
File "/home/jansel/pytorch/functorch/einops/rearrange.py", line 7, in <module>
from functorch._C import dim as _C
ImportError: initialization failed
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,763,769,278
|
[tp] propagate src_data_rank kwarg in TP API
|
wanchaol
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144005
* #143883
As titled, this PR propagates the src_data_rank kwarg in the TP API, so that module-level APIs can choose src_data_rank and avoid the communication when it is not needed.
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,763,740,997
|
[inductor] Add types to compile_tasks.py and runtime_utils.py
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144044
* __->__ #144004
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,763,726,644
|
[dynamo][guards][feature] Do not realize LazyVariableTracker on `isinstance` checks
|
anijain2305
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Today, calling `isinstance` on LazyVariableTracker realizes the VT, inserting the guards. In many cases, this guard insertion is accidental and not really required for program correctness.
I am not sure how to do this exhaustively. Maybe we can look at the value of the LazyVariableTracker and support isinstance checks for a few common VTs.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,763,712,468
|
cpp_wrapper: Precompile device-specific header files
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 14
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146452
* #146706
* #146424
* #146109
* #146449
* #144349
* #144293
* __->__ #144002
This saves us about a second per compilation, which is _massive_ for the OpInfo tests. Total OpInfo test runtime is down about 2x from this change alone.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D69185685](https://our.internmc.facebook.com/intern/diff/D69185685)
| true
|
2,763,664,004
|
torch.utils.flop_counter.FlopCounterMode
|
yaoshiang
|
open
|
[
"triaged",
"module: flop counter"
] | 1
|
NONE
|
I found this class because it was referenced in the llama_recipes repo.
My question is whether this definition counts one addition and one multiplication as 2 FLOPs, or whether that's counted as 1 FLOP.
When reporting on GPU hardware, it's common to count the above as two flops.
But when reporting on models, it's common to count the above as one flop... the more accurate term would have been "MAC" - Multiply-Accumulate. But I believe the model literature just started calling it a "flop".
https://github.com/facebookresearch/fvcore/issues/69
This source strongly suggests that the reported value matches HW flops (a single multiply followed by a single add counts as two flops), since the number of multiplications required for a matmul of two matrices of shapes MxK and KxN is MxKxN, and the number of additions is Mx(K-1)xN, commonly approximated as MxKxN, yielding 2xMxKxN.
https://github.com/pytorch/pytorch/blob/baee623691a38433d10843d5bb9bc0ef6a0feeef/torch/utils/flop_counter.py#L55C1-L64C25
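A small sanity check with the counter itself, based on the `mm_flop` formula linked above (the `display` kwarg and `get_total_flops()` are from the current API and may differ across versions):
```python
import torch
from torch.utils.flop_counter import FlopCounterMode

M, K, N = 8, 16, 32
a, b = torch.randn(M, K), torch.randn(K, N)

with FlopCounterMode(display=False) as counter:
    a @ b  # dispatches to aten::mm

# Per the linked mm_flop (m * n * 2 * k), these should match.
print(counter.get_total_flops(), 2 * M * K * N)
```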
| true
|
2,763,626,385
|
[Submodule] Bump Cutlass to 3.5.1 OSS PR
|
drisspg
|
closed
|
[
"module: cuda",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: float8"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144180
* __->__ #144000
## Summary
Follow-up PR to https://github.com/pytorch/pytorch/pull/143515. That PR added a bunch of macro switches to ensure both 3.4 and 3.5.1 built successfully. This PR actually bumps the cutlass pin to 3.5.1.
I am going to do a stack on top to add conditional gates for 3.6, hijacking the 3.4 switches. We will leapfrog our way to the top :)
cc @ptrblck @msaroufim @eqy @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
2,763,626,321
|
[cutlass-3] Update third-party/cutlass-3 from 3.4 to 3.5.1 (#143515)
|
drisspg
|
closed
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: sparse",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144011
* #144000
* __->__ #143999
Summary:
This commit was generated using `mgt import`.
pristine code for third-party libraries:
third-party/cutlass-3
uuid_71a50b25d7734c28883759737fadc750
This also makes updates to different repositories throughout FB code to roll any updates needed for this new release.
I was not able to get AsyncMM.cu to build (still trying); Yfiu suggested that I just skip it for now.
Test Plan:
Have run various build commands to try and expose errors:
- buck build mode/opt @//mode/inplace -c fbcode.nvcc_arch=h100a //caffe2:libtorch_cuda
- buck2 build --config fbcode.enable_gpu_sections=true --config fbcode.nvcc_arch=h100a --config fbcode.platform010_cuda_version=12 --flagfile fbcode//mode/opt fbcode//ai_codesign/gen_ai/cutlass-kernels:fmha_forward_lib_pipeline_h128
- buck2 build --flagfile fbsource//arvr/mode/platform010/cuda12/opt-stripped fbsource//arvr/libraries/depthlink/clients/depth/nodes/dtof_raw_depth_postprocessing:dtof_raw_depth_postprocessing
- buck2 test 'fbcode//mode/opt' fbcode//accelerators/workloads/models/sticker_gen/RE_tests:test_unet_aotinductor
Reviewed By: lw
Differential Revision: D67291144
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,763,617,445
|
[MPSInductor] Fix multiple kernel generation
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* __->__ #143998
* #143977
* #143973
* #143949
* #143948
At the moment this is done by generating multiple MetalLibraries.
`pytest test/inductor/test_torchinductor.py -k _mps` score is 434 failed, 317 passed, 32 skipped
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,763,601,679
|
[dynamo][dicts] Guarding lazily on dict keys
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144342
* #144165
* __->__ #143997
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,763,567,547
|
pytorch with xpu support fails to eval pre trained models
|
farooqkz
|
closed
|
[
"triaged",
"module: xpu"
] | 11
|
NONE
|
### 🐛 Describe the bug
I have installed pytorch as written here (the preview build): https://pytorch.org/docs/main/notes/get_start_xpu.html
Then I'm trying the first example code from the same page:
```
import torch
import torchvision.models as models
model = models.resnet50(weights="ResNet50_Weights.DEFAULT")
model.eval()
data = torch.rand(1, 3, 224, 224)
model = model.to("xpu")
data = data.to("xpu")
with torch.no_grad():
model(data)
print("Execution finished")
```
It fails:
```
ZE_LOADER_DEBUG_TRACE:Using Loader Library Path:
ZE_LOADER_DEBUG_TRACE:Tracing Layer Library Path: libze_tracing_layer.so.1
Traceback (most recent call last):
File "/home/farooqkz/py/intel_xpu_test.py", line 12, in <module>
model(data)
File "/home/farooqkz/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/farooqkz/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/farooqkz/.local/lib/python3.12/site-packages/torchvision/models/resnet.py", line 285, in forward
return self._forward_impl(x)
^^^^^^^^^^^^^^^^^^^^^
File "/home/farooqkz/.local/lib/python3.12/site-packages/torchvision/models/resnet.py", line 268, in _forward_impl
x = self.conv1(x)
^^^^^^^^^^^^^
File "/home/farooqkz/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/farooqkz/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/farooqkz/.local/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/farooqkz/.local/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 549, in _conv_forward
return F.conv2d(
^^^^^^^^^
RuntimeError: could not create an engine
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux trixie/sid (x86_64)
GCC version: (Debian 14.2.0-8) 14.2.0
Clang version: 19.1.5 (1)
CMake version: version 3.31.2
Libc version: glibc-2.40
Python version: 3.12.8 (main, Dec 13 2024, 13:19:48) [GCC 14.2.0] (64-bit runtime)
Python platform: Linux-6.10.9-amd64-x86_64-with-glibc2.40
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 3900X 12-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 85%
CPU max MHz: 4919.0000
CPU min MHz: 550.0000
BogoMIPS: 8000.30
[DELETED]
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnxruntime==1.20.0
[pip3] optree==0.13.1
[pip3] pytorch-triton-xpu==3.1.0+91b14bf559
[pip3] torch==2.5.1+xpu
[pip3] torch-geometric==2.6.1
[pip3] torchaudio==2.5.1+xpu
[pip3] torchvision==0.20.1+xpu
[conda] Could not collect
```
Also:
```
farooqkz@darthryzen:~/py$ uname -a
Linux darthryzen 6.10.9-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.10.9-1 (2024-09-08) x86_64 GNU/Linux
```
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,763,535,977
|
Add networkx as bazel dep to fix CI failure
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/periodic"
] | 3
|
CONTRIBUTOR
|
Add networkx as a dependency for test_bazel
Example failure: https://github.com/pytorch/pytorch/actions/runs/12551752021/job/34996706301
```
INFO: From Testing //:test_bazel:
==================== Test output for //:test_bazel:
Traceback (most recent call last):
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/test/_test_bazel.py", line 33, in <module>
test_simple_compile_eager()
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/test/_test_bazel.py", line 27, in test_simple_compile_eager
opt_foo1 = torch.compile(foo, backend="eager")
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/torch/__init__.py", line 2533, in compile
backend = _TorchCompileWrapper(backend, mode, options, dynamic)
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/torch/__init__.py", line 2342, in __init__
self.compiler_fn = lookup_backend(backend)
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/torch/_dynamo/backends/registry.py", line 66, in lookup_backend
_lazy_import()
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/torch/_dynamo/backends/registry.py", line 102, in _lazy_import
import_submodule(backends)
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/torch/_dynamo/utils.py", line 2797, in import_submodule
importlib.import_module(f"{mod.__name__}.{filename[:-3]}")
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/execroot/pytorch/external/python3_10_x86_64-unknown-linux-gnu/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/torch/_dynamo/backends/common.py", line 12, in <module>
from torch._functorch.aot_autograd import (
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/torch/_functorch/aot_autograd.py", line 147, in <module>
from .partitioners import default_partition
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/torch/_functorch/partitioners.py", line 31, in <module>
from ._activation_checkpointing.graph_info_provider import GraphInfoProvider
File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/sandbox/processwrapper-sandbox/6504/execroot/pytorch/bazel-out/k8-fastbuild/bin/test_bazel.runfiles/pytorch/torch/_functorch/_activation_checkpointing/graph_info_provider.py", line 3, in <module>
import networkx as nx
ModuleNotFoundError: No module named 'networkx'
```
No periodic runs on this PR or its main branch commit, but I'm pretty sure it started with https://togithub.com/pytorch/pytorch/pull/143539
| true
|
2,763,495,538
|
DISABLED test_setting_meta_device_model_broadcasting_and_memory (__main__.TestStateDict)
|
clee2000
|
closed
|
[
"oncall: distributed",
"module: rocm",
"triaged",
"skipped"
] | 3
|
CONTRIBUTOR
|
Platforms: rocm
Started probably at https://github.com/pytorch/pytorch/pull/142845
https://hud.pytorch.org/hud/pytorch/pytorch/9d026000de01bbd4d5c97bdca88cc6228507617a/3?per_page=100&name_filter=distributed&mergeLF=true
https://github.com/pytorch/pytorch/actions/runs/12409302699/job/34672198799
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22distributed%2Fcheckpoint%2Ftest_state_dict.py%3A%3ATestStateDict%3A%3Atest_setting_meta_device_model_broadcasting_and_memory%22%5D)).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,763,495,103
|
Return attention weights in scaled_dot_product_attention
|
mseeger
|
closed
|
[
"triaged",
"module: sdpa"
] | 0
|
NONE
|
### 🚀 The feature, motivation and pitch
I'd like to reopen the request #119811, but for a special case, namely generative inference, where the Q tensor is very small (just a single token). The request is to return the attention weights along with the normal MHA output.
Why is this important? In order to implement KV cache eviction strategies, like heavy hitter oracle, these weights are needed.
The suggestion in that issue, to pass an identity matrix as V, is not useful, because this tensor would be huge, defeating the whole purpose of efficient MHA in this case: its size would be at least the square of the KV cache length. And the application is not just some visualization; it must run efficiently inside generation.
I dug a bit into the sources. It seems that some functions called by `scaled_dot_product_attention` return attn_weights as the second argument, for example `_scaled_dot_product_attention_math` in `aten/src/ATen/native/transformers/attention.cpp`, but most others do not. They return tuples of `(output, logsumexp, ...)`, and `scaled_dot_product_attention` returns `output`. I don't really know what `logsumexp` is, but it has the wrong shape; it scales with the size of Q.
Any hint for how to get this done would be appreciated, except for the "trick" to pass the identity as V.
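For reference, in the single-token decode case the weights I'm after are just the softmax of q·kᵀ; a plain-math sketch (ignoring masks and dropout, shapes are illustrative) of what the fused path computes internally:
```python
import math
import torch

B, H, S, D = 2, 8, 1024, 64
q = torch.randn(B, H, 1, D)   # single decode token
k = torch.randn(B, H, S, D)
v = torch.randn(B, H, S, D)

attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(D), dim=-1)  # (B, H, 1, S)
out = attn @ v                                                        # (B, H, 1, D)
# `attn` is exactly what a heavy-hitter style eviction policy needs per KV entry.
```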
### Alternatives
_No response_
### Additional context
_No response_
| true
|
2,763,490,748
|
[CI] Multigpu 1 -> 2 shards
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/periodic"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
It's been timing out https://github.com/pytorch/pytorch/actions/runs/12544191739/job/34977636276
They're still somewhat uneven but they're both under the limit now. It would probably be better to use run_test.py's sharding to do this, maybe in another PR
| true
|
2,763,486,125
|
Fix flaky "Upload test stats" job
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143991
Test stat uploading was intermittently failing due to certain XML strings being opportunistically converted to numbers, when string output was expected. This PR makes the conversion behavior optional, which should fix the stat uploads.
| true
|
2,763,471,382
|
[AOTI] don't codegen autotune_at_compile_time for non-Triton kernels
|
ColinPeppler
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
`autotune_at_compile_time` is a separate codegen file specifically for autotuning Triton kernels. We can skip it for non-Triton kernels (like CUTLASS).
This test (test_aoti_workspace_ptr) checks that `workspace_0.data_ptr()` is codegen-ed correctly in AOTI.
```
// in AOTI codegen
kernels.cuda_fused_0(
(const half*)arg0_1.data_ptr(), (const half*)arg1_1.data_ptr(), (half*)buf0.data_ptr(),
(int)200, (int)5216, (int)10432, (int)10432, (int)5216, (int)0, (int)5216,
(size_t*)nullptr, (uint8_t*)workspace_0.data_ptr(), stream);
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143990
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang @aakhundov
| true
|
2,763,470,826
|
[FSDP] Add workaround to fix `buffer_dtype` without root parameters
|
awgu
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143989
Fixes https://github.com/pytorch/pytorch/issues/143900
cc @H-Huang @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,763,437,747
|
Add a knob to control how many blocks are used by persistent matmul/attn kernels
|
lw
|
closed
|
[
"module: cuda",
"triaged",
"module: cublas",
"module: linear algebra"
] | 3
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
We train a transformer-style model using FSDP, and we have a very good overlap between the matmul kernels (from cuBLAS) and the NCCL operation in the background. However, when profiling, we have observed that the **matmuls take 2x as long** to complete when they are overlapped with a NCCL kernel!
We believe this is easily explained: we're running on H100 GPUs and, upon inspection, all the matmuls look like they are using "persistent" kernels. That is, they launch as many CUDA blocks as there are SMs on the GPU (i.e., 132) and each of these blocks will process several tiles in a row. What we're observing is thus a form of "wave quantization" where, due to NCCL occupying some SMs, not all blocks of the matmuls can be scheduled at once, thus breaking them into two waves, which thus take twice as long to complete.
Since NCCL only occupies ~10% of the SMs, it would be much more efficient if the matmuls tried to launch a number of blocks that corresponds to ~90% of the SMs. This would allow the two kernels to run simultaneously in a single wave, with the matmuls only being ~10% slower, not ~50%!
For that, however, we need PyTorch to add a new knob allowing us to control such a value, and to forward that knob when launching its cuBLAS kernels (and others).
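To make the wave-quantization argument concrete, a back-of-the-envelope sketch (the SM counts are illustrative, not measured):
```python
import math

num_sms = 132                 # H100
nccl_sms = 14                 # ~10% of the GPU taken by the overlapped NCCL kernel
persistent_blocks = num_sms   # today: one persistent block per SM

# Overlapped: only (num_sms - nccl_sms) blocks can run at once -> 2 waves.
print(math.ceil(persistent_blocks / (num_sms - nccl_sms)))  # 2

# With the requested knob, launching ~90% of the SMs keeps it to one wave.
tuned_blocks = num_sms - nccl_sms
print(math.ceil(tuned_blocks / (num_sms - nccl_sms)))       # 1
```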
### Alternatives
None. We couldn't find any environment variable provided by cuBLAS that allows us to override the number of blocks launched.
### Additional context
With longer NCCL kernels, matmuls take a long time:
<img width="1555" alt="Screenshot 2024-12-30 at 17 29 23" src="https://github.com/user-attachments/assets/d91d192e-16e9-4108-9d8e-5cb7caef80f6" />
With shorter NCCL kernels, the non-overlapped matmuls now take less time:
<img width="1439" alt="Screenshot 2024-12-30 at 17 29 42" src="https://github.com/user-attachments/assets/6e1fff67-b1a8-4b3b-a582-6648fc8b00bf" />
cc @ptrblck @msaroufim @eqy @csarofeen @xwang233 @jianyuh @nikitaved @pearu @mruberry @walterddr @Lezcano
| true
|
2,763,378,477
|
Enable several readability checks
|
cyyever
|
open
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"open source",
"release notes: cpp",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 5
|
COLLABORATOR
|
They add const to members and parameters, along with other fixes.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan @yf225
| true
|
2,763,362,589
|
[ROCm] Fix for ld failed to convert GOTPCREL relocation in PyTorch build
|
hongxiayang
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 14
|
COLLABORATOR
|
I experienced an error while doing a DEBUG build of pytorch on rocm:
```
additional relocation overflows omitted from the output
/usr/bin/ld: failed to convert GOTPCREL relocation; relink with --no-relax
```
Based on discussions in the similar issue #138427, I fixed it by adding `--offload-compress` to HIP_HIPCC_FLAGS, which let the DEBUG build succeed on my node.
Further updated the PR to enable the flag for non-DEBUG builds as well due to the size reduction.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @naromero77amd
| true
|
2,763,348,025
|
[DTensor] Allow multiple dimensions to be sharded together (as if flattened)
|
lw
|
open
|
[
"oncall: distributed",
"module: dtensor"
] | 6
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
PyTorch's utilities for sequence parallelism seem to assume that tensors will have two separate dimensions for batch (dim 0) and sequence (dim 1), and shard only along dim 1. However, if batch > 1, this means that each rank's shard will be non-contiguous. This is a problem because NCCL wants contiguous storages, hence we end up with extra kernels for (un-)compacting the data. You can clearly see them in this profiler trace obtained from such a job:

The simplest solution here, which wouldn't require changing user code, is for these tensors to be sharded across the _joint_ 0 and 1 dimensions, i.e., as if those two dimensions were flattened. In other words, we'd need DTensor to support a placement type that looks like `Shard(dims=[0, 1])`.
### Alternatives
The only alternative I see is for users to flatten those tensors manually in their own model code. This is not too hard, but I claim it's undesirable, since it _worsens_ the user code (makes it more opaque) out of a concern for efficiency, whereas the aim of torch.compile and DTensors should be to deal with efficiency without the user's intervention.
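For completeness, the manual workaround I'd like to avoid looks roughly like this (a sketch; the mesh setup and `distribute_tensor` call are omitted):
```python
import torch

B, S, D = 4, 8192, 1024
x = torch.randn(B, S, D)

# Instead of Shard(1) on a (B, S, D) tensor (non-contiguous shards),
# flatten batch and sequence so Shard(0) gives contiguous shards:
x_flat = x.flatten(0, 1)              # (B*S, D)
# ... distribute_tensor(x_flat, mesh, [Shard(0)]) ...
x_back = x_flat.unflatten(0, (B, S))  # restore (B, S, D) where needed
```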
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,763,312,863
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 9
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [28cfac20ec662abdb0ac98faf122450013e8f520](https://github.com/intel/torch-xpu-ops/commit/28cfac20ec662abdb0ac98faf122450013e8f520), which includes:
- Disable batch_norm vectorization path to fix accuracy issues.
- Fix the LSTM/RNN implementation error.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,763,266,524
|
[CD] Remove redundant triton dependency for xpu wheels
|
pytorchbot
|
closed
|
[
"open source",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 1
|
COLLABORATOR
|
Because XPU CD wheels enabled PyPI dependencies via https://github.com/pytorch/pytorch/pull/141135, PYTORCH_EXTRA_INSTALL_REQUIREMENTS now has a value for the XPU CD wheel build.
Works for https://github.com/pytorch/pytorch/issues/139722 and https://github.com/pytorch/pytorch/issues/114850
Fixes #143838
| true
|
2,763,069,223
|
Enable readability-redundant-declaration
|
cyyever
|
closed
|
[
"oncall: distributed",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"module: dynamo",
"ciflow/inductor"
] | 9
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,763,033,498
|
Fix typo: change 'recieve' into 'receive'
|
loricR
|
closed
|
[
"open source",
"Stale",
"release notes: releng",
"module: dynamo"
] | 3
|
NONE
|
Fix typo: change all occurrences of 'recieve' to 'receive'.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,762,923,074
|
Fixed bug in FindMKL.cmake
|
mmoelle1
|
closed
|
[
"module: build",
"triaged",
"open source",
"Stale"
] | 4
|
NONE
|
This PR fixes the CMake error
```
CMake Error at cmake/Modules/FindMKL.cmake:195 (IF):
if given arguments:
"UNIX" "AND"
Unknown arguments specified
Call Stack (most recent call first):
cmake/Modules/FindMKL.cmake:353 (GET_MKL_LIB_NAMES)
cmake/Modules/FindBLAS.cmake:99 (FIND_PACKAGE)
CMakeLists.txt:118 (find_package)
```
cc @malfet @seemethere
| true
|
2,762,835,405
|
[ci] Add riscv opt-int build
|
zhangfeiv0
|
open
|
[
"module: build",
"triaged",
"open source",
"release notes: releng",
"module: risc-v"
] | 28
|
CONTRIBUTOR
|
Hi, @malfet
Based on the previous discussion:
[RISCV CI support · Issue #141550 · pytorch/pytorch](https://github.com/pytorch/pytorch/issues/141550)
I have cross-compiled PyTorch for the RISC-V architecture on x86_64 Ubuntu 24.04 and created a new PR for it. Could you please help review it?
cc @malfet @seemethere
| true
|
2,762,764,855
|
torch.compile() returns a different value than interpreted (NaN vs 1)
|
dcci
|
open
|
[
"module: cpu",
"module: NaNs and Infs",
"oncall: pt2",
"oncall: cpu inductor"
] | 5
|
MEMBER
|
### 🐛 Describe the bug
Snippet:
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
x = torch.rsqrt(x)
x = torch.angle(x)
x = torch.atan(x)
x = torch.positive(x)
x = torch.sin(x)
return x
func = Model().to('cpu')
x = torch.tensor([0.7886758446693420, -1.1796921491622925, 0.5152440071105957,
1.4106913805007935, 1.3812966346740723, 0.5720621347427368,
1.7286888360977173, -2.2948377132415771, -0.3201593160629272,
1.3686311244964600])
pata = func(x.clone())
print(pata)
func1 = torch.compile(func, fullgraph=True)
tino = func1(x.clone())
print(tino)
print(torch.allclose(pata, tino, equal_nan=True))
print(torch.__version__)
```
Output:
```
tensor([0., nan, 0., 0., 0., 0., 0., nan, nan, 0.])
tensor([0., 1., 0., 0., 0., 0., 0., 1., 1., 0.])
False
2.6.0a0+gitd2f7694
```
### Versions
PyTorch version: 2.6.0a0+gitd2f7694
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: 18.1.8 (CentOS 18.1.8-3.el9)
CMake version: version 3.31.2
Libc version: glibc-2.34
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 88
On-line CPU(s) list: 0-87
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 88
Socket(s): 1
Stepping: 11
BogoMIPS: 3591.74
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni p
clmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept v
pid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 arat vnmi umip pku ospke avx512_vnni md_clear flush_l1d ar
ch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.8 MiB (88 instances)
L1i cache: 2.8 MiB (88 instances)
L2 cache: 352 MiB (88 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-87
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gitd2f7694
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gitd2f7694 dev_0 <develop>
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @chauhang @penguinwu
| true
|
2,762,727,380
|
[MPSInductor] Implement minimum and maximum ops
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #143998
* __->__ #143977
* #143973
* #143949
* #143948
By calling `metal::min` and `metal::max` respectively, with arguments typecast to a common type to avoid ambiguous-call errors.
TODO: Implement NaN propagation for both eager and compile, see https://github.com/pytorch/pytorch/issues/143976
`pytest test/inductor/test_torchinductor.py -k _mps` score is 460 failed, 291 passed, 32 skipped
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,762,725,089
|
`torch.maximum` and `torch.minimum` do not propagate nans on MPS
|
malfet
|
closed
|
[
"triaged",
"module: NaNs and Infs",
"module: mps"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Consider running the following
```python
import torch
x = torch.rand(32, device="mps")
y = torch.rand(32, device="mps")
x[3] = torch.nan
y[5] = torch.nan
print(x.isnan().any().item(), torch.minimum(x, y).isnan().any().item())
```
It will print `True False`, but it should print `True True`.
Discovered while enabling test_max_min inductor test
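Until this is fixed in the kernels, an eager-side workaround (a sketch; it adds an extra pass over the data) is to patch the result with `torch.where`:
```python
import torch

x = torch.rand(32, device="mps")
y = torch.rand(32, device="mps")
x[3], y[5] = torch.nan, torch.nan

nan_mask = x.isnan() | y.isnan()
out = torch.where(nan_mask, torch.full_like(x, float("nan")), torch.minimum(x, y))
print(out.isnan().any().item())  # True, as expected
```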
### Versions
2.5.1, nightly
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,762,692,608
|
[Inductor UT] Generalize newly introduced device-bias hard code in
|
etaf
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ci-no-td"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143975
test_pattern_matcher.py
Fix #143974
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,762,691,348
|
[Break XPU] Hard code “cuda” in GPU test case cause failure on XPU.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
PR #139321 introduces a new test case `torch/_inductor/pattern_matcher.py:test_duplicate_search` which does not specify requires_cuda but hard-codes the device type `cuda`, causing it to fail on XPU.
https://github.com/pytorch/pytorch/blob/2ed4d65af0a1993c0df7b081f4088d0f3614283e/test/inductor/test_pattern_matcher.py#L203-L207
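A hedged sketch of the generalization (assuming the usual `GPU_TYPE` helper from `torch.testing._internal.inductor_utils`):
```python
import torch
from torch.testing._internal.inductor_utils import GPU_TYPE  # resolves to "cuda" or "xpu"

# Inside the test, build inputs with the detected GPU type instead of "cuda":
x = torch.randn(8, device=GPU_TYPE)
```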
### Versions
PyTorch version: 2.6.0a0+git861e26d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,762,687,827
|
[MPSInductor] Fix index generation for transpose
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #143998
* #143977
* __->__ #143973
* #143949
* #143948
Alas, PythonPrinter would not work here, nor would CppPrinter, so this starts building a MetalPrinter.
`pytest test/inductor/test_torchinductor.py -k _mps` score is 474 failed, 277 passed, 32 skipped
Before this change:
`pytest test/inductor/test_torchinductor.py -k _mps` reported 506 failed, 245 passed, 32 skipped
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,762,650,199
|
[DCP] dcp.load leads to param type mismatch.
|
nanlliu
|
closed
|
[
"oncall: distributed",
"oncall: distributed checkpointing"
] | 4
|
NONE
|
### 🐛 Describe the bug
I tried to load a dcp saved checkpoint on a single GPU.
However, when I loaded the optimizer state using `set_optimizer_state_dict`, it led to the error below.
Shouldn't `BytesStorageMetadata` be the same as `torch.distributed.checkpoint.metadata.BytesStorageMetadata`?
```
CheckpointException: CheckpointException ranks:dict_keys([0])
Traceback (most recent call last): (RANK 0)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/utils.py", line 164, in reduce_scatter
local_data = map_fun()
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/logger.py", line 83, in wrapper
result = func(*args, **kwargs)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/state_dict_loader.py", line 211, in local_step
local_plan = planner.create_local_plan()
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/default_planner.py", line 233, in create_local_plan
return create_default_local_load_plan(
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/default_planner.py", line 365, in create_default_local_load_plan
requests += _create_read_items(fqn, md, obj)
File "/mnt/filestore/users/nan/.venv/lib/python3.10/site-packages/torch/distributed/checkpoint/planner_helpers.py", line 269, in _create_read_items
raise ValueError(
ValueError: Invalid checkpoint metadata for opt_state.param_groups.0.weight_decay, expected BytesStorageMetadata but found <class 'torch.distributed.checkpoint.metadata.BytesStorageMetadata'>
...
[rank0]: ValueError: Invalid checkpoint metadata for opt_state.param_groups.0.lr, expected BytesStorageMetadata but found <class 'torch.distributed.checkpoint.metadata.TensorStorageMetadata'>
```
### Versions
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] open_clip_torch==2.28.0
[pip3] optree==0.13.1
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] torch==2.5.0
[pip3] torch_scatter==2.1.2.dev4
[pip3] torch-tb-profiler==0.4.3
[pip3] torchao==0.6.1
[pip3] torchaudio==2.5.0
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.6.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @pradeepfn
| true
|
2,762,648,306
|
[ROCm] enable CK backend for bf16/fp16 on gfx11
|
jfactory07
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 30
|
CONTRIBUTOR
|
This change enables the CK backend for fp16 on gfx11.
@jeffdaily
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,762,601,723
|
[AOTI] Not use AOTI_TORCH_CHECK in non AOTI mode.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143970
Fix #143967
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,762,600,557
|
[distributed] parallelize_module error with `SequenceParallel`
|
gameofdimension
|
open
|
[
"oncall: distributed",
"triaged"
] | 5
|
NONE
|
### 🐛 Describe the bug
repro code:
run with `torchrun --nproc-per-node=8 --local-ranks-filter=1 -m bad_match`
```python
import os
from typing import Optional, Tuple
import torch
import torch.distributed as dist
from torch import nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (SequenceParallel,
parallelize_module)
def cleanup():
dist.destroy_process_group()
def init_distributed():
# Initializes the distributed backend
# which will take care of sychronizing nodes/GPUs
dist_url = "env://" # default
# only works with torch.distributed.launch // torch.run
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
local_rank = int(os.environ["LOCAL_RANK"])
# this will make all .cuda() calls work properly
torch.cuda.set_device(local_rank)
dist.init_process_group(
backend="nccl", init_method=dist_url, world_size=world_size, rank=rank,
device_id=torch.device(f"cuda:{torch.cuda.current_device()}"),
)
# synchronizes all the threads to reach this point before moving on
dist.barrier()
return world_size, rank, local_rank
class AdaLayerNormZeroSingle(nn.Module):
def __init__(self, embedding_dim: int, norm_type="layer_norm", bias=True):
super().__init__()
self.linear = nn.Linear(embedding_dim, 3 * embedding_dim, bias=bias)
self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False, eps=1e-6)
def forward(
self,
x: torch.Tensor,
emb: Optional[torch.Tensor] = None,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
pass
class FluxSingleTransformerBlock(nn.Module):
def __init__(self, dim, mlp_ratio=4.0):
super().__init__()
self.mlp_hidden_dim = int(dim * mlp_ratio)
self.norm = AdaLayerNormZeroSingle(dim)
def forward(
self,
hidden_states: torch.FloatTensor,
):
pass
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.block = FluxSingleTransformerBlock(
dim=3072,
)
def forward(
self,
hidden_states: torch.FloatTensor,
):
return self.block(hidden_states)
def main():
init_distributed()
mesh = init_device_mesh(
"cuda", (dist.get_world_size(), ))
model = Model()
parallelize_module(
model.block, mesh, parallelize_plan={
"norm.norm": SequenceParallel(
use_local_output=True,
),
}
)
cleanup()
if __name__ == "__main__":
main()
```
error
```
[rank1]: Traceback (most recent call last):
[rank1]: File "<frozen runpy>", line 198, in _run_module_as_main
[rank1]: File "<frozen runpy>", line 88, in _run_code
[rank1]: File "/home/pagoda/t2i/bad_match.py", line 70, in <module>
[rank1]: main()
[rank1]: File "/home/pagoda/t2i/bad_match.py", line 59, in main
[rank1]: parallelize_module(
[rank1]: File "/home/pagoda/venv/lib/python3.11/site-packages/torch/distributed/tensor/parallel/api.py", line 112, in parallelize_module
[rank1]: parallelize_module(submodule, device_mesh, parallelize_style)
[rank1]: File "/home/pagoda/venv/lib/python3.11/site-packages/torch/distributed/tensor/parallel/api.py", line 85, in parallelize_module
[rank1]: return parallelize_plan._apply(module, device_mesh)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/pagoda/venv/lib/python3.11/site-packages/torch/distributed/tensor/parallel/style.py", line 368, in _apply
[rank1]: return distribute_module(
[rank1]: ^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/pagoda/venv/lib/python3.11/site-packages/torch/distributed/tensor/_api.py", line 852, in distribute_module
[rank1]: partition_fn(name, submod, device_mesh)
[rank1]: File "/home/pagoda/venv/lib/python3.11/site-packages/torch/distributed/tensor/parallel/style.py", line 341, in _replicate_module_fn
[rank1]: module.register_parameter(p_name, replicated_param)
[rank1]: File "/home/pagoda/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 604, in register_parameter
[rank1]: raise KeyError('parameter name can\'t contain "."')
[rank1]: KeyError: 'parameter name can\'t contain "."'
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.9 (main, May 14 2024, 09:36:59) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.0-136.36.0.112.4.oe2203sp1.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8352Y CPU @ 2.20GHz
Stepping: 6
CPU MHz: 2800.001
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 3 MiB
L1i cache: 2 MiB
L2 cache: 80 MiB
L3 cache: 96 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,762,599,514
|
change import relative paths due to internal build failures
|
wdvr
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 4
|
CONTRIBUTOR
|
Internal builds are failing due to #143355; this changes the imports to be relative, similar to other imports.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,762,590,304
|
[AOTI] run without AOTI but get "unresolved external symbol aoti_torch_check referenced in function kernel" on Windows.
|
etaf
|
closed
|
[
"module: windows",
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
Hi, I ran into an AOTI-related problem while running CPU Inductor on Windows, even though I'm not using AOTI. The error is: “unresolved external symbol aoti_torch_check referenced in function kernel”.
The root cause is in the cpp kernel codegen `CppKernel::assert_function`: we always use AOTI_TORCH_CHECK for `assert_function` without checking whether it's AOT mode, and `AOTI_TORCH_CHECK` is decorated with `AOTI_TORCH_EXPORT`, which is only exported on Linux, not Windows.
https://github.com/pytorch/pytorch/blob/2ed4d65af0a1993c0df7b081f4088d0f3614283e/torch/_inductor/codegen/cpp.py#L2276-L2278
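A hedged sketch of the kind of guard I have in mind (the `aot_mode` attribute and exact plumbing are assumptions, not necessarily the real Inductor internals):
```python
# Hypothetical fix sketch (not the actual patch): choose the check macro based
# on whether Inductor is generating AOTI code, so plain cpp_wrapper builds on
# Windows don't reference the unexported aoti_torch_check symbol.
from torch._inductor.virtualized import V

def assert_function() -> str:
    if getattr(V.graph, "aot_mode", False):  # attribute name assumed
        return "AOTI_TORCH_CHECK"
    return "TORCH_CHECK"
```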
To reproduce using existing UT: `python test\inductor\test_torchinductor.py CpuTests.test_device_assert_cpu`
Error log:
```
_run_compile_cmd(cmd_line, cwd)
File "c:\xinanlin\pytorch\torch\_inductor\cpp_builder.py", line 341, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CppCompileError: C++ compile error
Command:
cl /I C:/Users/sdp/miniforge3/envs/xinanlin/Include /I c:/xinanlin/pytorch/torch/include /I c:/xinanlin/pytorch/torch/include/torch/csrc/api/include /I c:/xinanlin/pytorch/torch/include/TH /I c:/xinanlin/pytorch/torch/include/THC /D TORCH_INDUCTOR_CPP_WRAPPER /D STANDALONE_TORCH_HEADER /D C10_USING_CUSTOM_GENERATED_MACROS /D CPU_CAPABILITY_AVX2 /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/sdp/AppData/Local/Temp/tmpjvm7ohvw/s4/cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.cpp /arch:AVX2 /LD /FeC:/Users/sdp/AppData/Local/Temp/tmpjvm7ohvw/s4/cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.pyd /link /LIBPATH:C:/Users/sdp/miniforge3/envs/xinanlin/libs /LIBPATH:c:/xinanlin/pytorch/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib
Output:
Microsoft (R) C/C++ Optimizing Compiler Version 19.42.34435 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
cl : Command line warning D9025 : overriding '/openmp' with '/openmp:experimental'
cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.cpp
c:/xinanlin/pytorch/torch/include\ATen/cpu/vec/vec_half.h(16): warning C4556: value of intrinsic immediate argument '8' is out of range '0 - 7'
c:/xinanlin/pytorch/torch/include\ATen/cpu/vec/vec256/vec256_bfloat16.h(119): warning C4556: value of intrinsic immediate argument '8' is out of range '0 - 7'
c:/xinanlin/pytorch/torch/include\ATen/cpu/vec/vec256/vec256_bfloat16.h(124): warning C4556: value of intrinsic immediate argument '8' is out of range '0 - 7'
c:/xinanlin/pytorch/torch/include\ATen/cpu/vec/vec256/vec256_bfloat16.h(126): warning C4556: value of intrinsic immediate argument '8' is out of range '0 - 7'
Microsoft (R) Incremental Linker Version 14.42.34435.0
Copyright (C) Microsoft Corporation. All rights reserved.
/dll
/implib:C:/Users/sdp/AppData/Local/Temp/tmpjvm7ohvw/s4/cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.lib
/out:C:/Users/sdp/AppData/Local/Temp/tmpjvm7ohvw/s4/cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.pyd
/LIBPATH:C:/Users/sdp/miniforge3/envs/xinanlin/libs
/LIBPATH:c:/xinanlin/pytorch/torch/lib
torch.lib
torch_cpu.lib
torch_python.lib
sleef.lib
cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.obj
Creating library C:/Users/sdp/AppData/Local/Temp/tmpjvm7ohvw/s4/cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.lib and object C:/Users/sdp/AppData/Local/Temp/tmpjvm7ohvw/s4/cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.exp
cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.obj : error LNK2019: unresolved external symbol aoti_torch_check referenced in function kernel
C:\Users\sdp\AppData\Local\Temp\tmpjvm7ohvw\s4\cs4bvgyshmfzu4274e5ceyxca6e2gjdbimc4427ujoquzyxyt6sl.pyd : fatal error LNK1120: 1 unresolved externals
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test\inductor\test_torchinductor.py CpuTests.test_device_assert_cpu
```
### Versions
PyTorch version: 2.6.0a0+git2ed4d6
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78
| true
|
2,762,586,049
|
[WIP] Enable MPS inductor testing
|
malfet
|
closed
|
[
"Stale",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"keep-going"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143966
| true
|
2,762,583,066
|
test case got killed file_based_local_timer_test.py test_get_timer_recursive
|
garfield1997
|
open
|
[
"module: tests",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The test case gets killed when running file_based_local_timer_test.py::FileTimerTest::test_get_timer_recursive.
Output:
```
python file_based_local_timer_test.py -k 'test_get_timer_recursive'
Killed
```
### Versions
main
cc @mruberry @ZainRizvi
| true
|
2,762,546,651
|
[Submodule] Bump flatbuffers to v24.12.23
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,762,513,145
|
Enable more readability-redundant checks
|
cyyever
|
closed
|
[
"oncall: distributed",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"module: dynamo",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
They are helpful for simplifying code.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,762,483,143
|
Nightly pytorch wheel for prerelease version 2.6 is build with C++11 ABI on, at least for CPU
|
jeffhataws
|
closed
|
[
"module: binaries",
"module: abi",
"triaged"
] | 12
|
NONE
|
### 🐛 Describe the bug
When we install the nightly PyTorch 2.6 and test it with torch-xla 2.6, it appears the CPU version is built with the C++11 ABI on, causing an error with torch-xla.
```
(aws_neuron_venv) [ec2-user@ip-10-1-17-115 ~]$ pip install --force-reinstall --no-deps torch --index-url https://download.pytorch.org/whl/nightly/cpu
Looking in indexes: https://download.pytorch.org/whl/nightly/cpu
Collecting torch
Using cached https://download.pytorch.org/whl/nightly/cpu/torch-2.6.0.dev20241229%2Bcpu-cp39-cp39-manylinux_2_28_x86_64.whl (175.1 MB)
Installing collected packages: torch
Attempting uninstall: torch
Found existing installation: torch 2.6.0.dev20241229+cpu
Uninstalling torch-2.6.0.dev20241229+cpu:
Successfully uninstalled torch-2.6.0.dev20241229+cpu
Successfully installed torch-2.6.0.dev20241229+cpu
```
```
(aws_neuron_venv) [ec2-user@ip-10-1-17-115 ~]$ python3 -c "import torch_xla"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/ec2-user/aws_neuron_venv/lib64/python3.9/site-packages/torch_xla/__init__.py", line 20, in <module>
import _XLAC
ImportError: /home/ec2-user/aws_neuron_venv/lib64/python3.9/site-packages/_XLAC.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZN5torch4lazy13MetricFnValueEd
```
When checking using torch.compiled_with_cxx11_abi(), we see that it returns True:
```
(aws_neuron_venv) [ec2-user@ip-10-1-17-115 ~]$ python3 -c "import torch; print(torch.compiled_with_cxx11_abi())"
True
```
The default should be C++11 ABI off, because there are separate C++11 ABI wheels, for example [torch-2.6.0.dev20241229+cpu.cxx11.abi-cp310-cp310-linux_x86_64.whl](https://download.pytorch.org/whl/nightly/cpu-cxx11-abi/torch-2.6.0.dev20241229%2Bcpu.cxx11.abi-cp310-cp310-linux_x86_64.whl)
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241229+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2023.6.20241212 (x86_64)
GCC version: (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.9.20 (main, Dec 11 2024, 00:00:00) [GCC 11.4.1 20230605 (Red Hat 11.4.1-2)] (64-bit runtime)
Python platform: Linux-6.1.119-129.201.amzn2023.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 6
BogoMIPS: 5799.85
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd ida arat avx512vbmi pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear flush_l1d arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 5 MiB (4 instances)
L3 cache: 54 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.6.0.dev20241229+cpu
[pip3] torch-neuronx==2.6.0.2.4.0
[pip3] torch-xla==2.6.0+git7dd2697
[pip3] torchvision==0.22.0.dev20241229+cpu
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar @atalman
| true
|
2,762,456,333
|
[poc][not-ready-for-review] visualize dynamic shapes shape env mutations over time
|
bobrenjc93
|
closed
|
[
"Stale",
"ciflow/trunk",
"release notes: fx",
"fx",
"ciflow/inductor"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143961
logs to /tmp/ds.txt and you can copy and paste it into https://stateviz.vercel.app/
Differential Revision: [D69164764](https://our.internmc.facebook.com/intern/diff/D69164764)
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,762,346,118
|
torch.dist is more numerical unstable on scalar input after torch.compile
|
meetmul
|
closed
|
[
"module: numerical-stability",
"triaged",
"oncall: pt2"
] | 0
|
NONE
|
### 🐛 Describe the bug
It seems that torch.dist is more numerically unstable after torch.compile. Interestingly, this issue only occurs when `a` and `b` are scalars. If `a` or `b` contains more than one element, torch.dist produces consistent results (and consistent numerical stability) after torch.compile.
To reproduce
```python
import torch
import numpy as np
torch.manual_seed(5)
dist = torch.dist
compiled_dist = torch.compile(torch.dist)
incon1 = []
incon2 = []
for i in range(100):
a = torch.rand(1).float()
b = torch.rand(1).float()
high_a = a.double()
high_b = b.double()
ref = compiled_dist(high_a, high_b, -41)
incon1.append(torch.abs(dist(a, b, -41) - ref))
incon2.append(torch.abs(compiled_dist(a, b, -41) - ref))
print("Average error before compile: ", np.average(incon1))
print("Average error after compile: ", np.average(incon2))
```
Output:
```
Average error before compile: 1.7824283715661694e-18
Average error after compile: 0.009653514623641966
```
I think torch.compile directly uses `aten.ops.dist`?
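As a sanity check (not part of the original report): for single-element inputs, the p-norm of `a - b` reduces to `|a - b|` for any nonzero `p`, so the plain absolute difference gives an exact reference value:
```python
import torch

torch.manual_seed(5)
a = torch.rand(1).float()
b = torch.rand(1).float()

# For a single-element difference, (|a - b|**p)**(1/p) == |a - b| for any p != 0,
# so the plain absolute difference is an exact reference.
ref = torch.abs(a - b)
print(torch.dist(a, b, -41) - ref)                 # eager
print(torch.compile(torch.dist)(a, b, -41) - ref)  # compiled
```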
### Versions
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu
| true
|
2,762,321,912
|
Defaults to C++20 in CMake torch targets
|
cyyever
|
open
|
[
"module: cpp",
"module: cpu",
"open source",
"NNC",
"release notes: build",
"topic: not user facing",
"ciflow/periodic",
"topic: build",
"ciflow/s390"
] | 2
|
COLLABORATOR
|
Some initial attempts.
cc @jbschlosser @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @EikanWang
| true
|
2,762,304,100
|
using more descriptive alt text for accessibility
|
netra212
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3
|
NONE
|
## Changes Made:
### Updated alt text for images to provide more descriptive and contextually relevant descriptions.
Example: Changed "Tensor illustration" to "Illustration of a Tensor operation in PyTorch."
This ensures the alt text aligns with accessibility best practices, enhancing clarity and inclusivity.
| true
|
2,762,281,852
|
Enable readability-qualified-auto in clang-tidy
|
cyyever
|
closed
|
[
"oncall: distributed",
"open source",
"release notes: cpp",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 1
|
COLLABORATOR
|
`auto *` indicates that the type is a pointer. Another benefit is adding `const` when possible.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan @yf225
| true
|
2,762,194,100
|
Add ability to skip compute capability checks for Triton
|
sasha0552
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: improvements",
"module: inductor",
"release notes: inductor"
] | 6
|
NONE
|
This PR adds an environment variable `TORCH_TRITON_SKIP_CC_CHECKS` that allows skipping CUDA compute capability checks, which is useful if the user is using [a custom Triton build](https://github.com/sasha0552/pascal-pkgs-ci) that does support older hardware.
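A minimal usage sketch, assuming the variable is read before Inductor checks the device's compute capability; the variable name comes from this PR's description and the compiled function is just a placeholder:
```python
import os

# Opt out of the compute-capability check added by this PR; everything
# other than the variable name is illustrative.
os.environ["TORCH_TRITON_SKIP_CC_CHECKS"] = "1"

import torch


@torch.compile
def f(x):
    return torch.nn.functional.relu(x) * 2


if torch.cuda.is_available():
    print(f(torch.randn(16, device="cuda")))
```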
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,762,112,534
|
Issue with aten::_sparse_coo_tensor_with_dims_and_tensors on Apple Silicon GPU (MPS) Backend when Using Whisper Model
|
Proton1917
|
open
|
[
"module: sparse",
"feature",
"triaged",
"module: mps"
] | 0
|
NONE
|
### 🐛 Describe the bug
## Description
An error occurs while running the Whisper model on MPS. The operation `aten::_sparse_coo_tensor_with_dims_and_tensors` fails to fall back to CPU even with `PYTORCH_ENABLE_MPS_FALLBACK=1`.
### Minimal Code to Reproduce the Issue
```python
import torch
import whisper

device = "mps" if torch.backends.mps.is_available() else "cpu"
model = whisper.load_model("large-v3", device=device)
result = model.transcribe("/path/to/audio/file.m4a")
```
### Versions
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,762,098,764
|
No ONNX function found for <OpOverload(op='quantized_decomposed.dequantize_per_channel', overload='default')>
|
ruixupu
|
open
|
[
"module: onnx",
"triaged",
"OSS contribution wanted"
] | 3
|
NONE
|
### 🐛 Describe the bug
We tried to leverage per_channel quantization in QAT and exported the trained model in onnx format.
```py
model = ...  # dummy PyTorch model (placeholder)
export_model = torch.export.export_for_training(
model, example_inputs).module()
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config(
is_per_channel=True,
))
prepared_model = prepare_qat_pt2e(export_model, quantizer)
quantized_model = convert_pt2e(prepared_model)
inp = torch.rand((32, 3, 384, 384))
print(inp.shape)
example_inputs = (inp,)
onnx_program = torch.onnx.export(
quantized_model, # model to export
example_inputs, # inputs of the model,
"my_model.onnx", # filename of the ONNX model
opset_version=20, # the ONNX version to export the model to
verbose=True,
input_names=["input"], # Rename inputs for the ONNX model
output_names=['output'], # the model's output names
dynamic=True, # create a dynamic ONNX model
dynamo=True, # True or False to select the exporter to use
dynamic_axes={'input': {0: 'batch_size'}, # variable length axes
'output': {0: 'batch_size'}},
verify=True, # check the model and all its submodules
)
```
we got the following error:
```pytb
---------------------------------------------------------------------------
DispatchError Traceback (most recent call last)
File ~/miniconda3/envs/ims/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_core.py:553, in _add_nodes(exported_program, model, lower, registry)
    552 if lower == "at_conversion":
--> 553     _handle_call_function_node_with_lowering(
    554         model,
    555         node,
    556         node_name_to_values,
    557         constant_farm,
    558         registry=registry,
    559         opset=opset,
    560     )
    561 else:
    562     # No lowering

File ~/miniconda3/envs/ims/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_core.py:444, in _handle_call_function_node_with_lowering(model, node, node_name_to_values, constant_farm, registry, opset)
    442 if onnx_function is None:
    443     # TODO(justinchuby): Fall back to ATen op or do something else?
--> 444     raise _errors.DispatchError(
    445         f"No ONNX function found for {node.target!r}. Failure message: {message}"
    446     )
    448 # Map FX inputs to ONNX inputs and fill optional inputs.
    449 # torch_args and torch_kwargs are for op-level validation
DispatchError: No ONNX function found for <OpOverload(op='quantized_decomposed.dequantize_per_channel', overload='default')>. Failure message: No decompositions registered for the real-valued input
...
<class 'torch.onnx._internal.exporter._errors.DispatchError'>: No ONNX function found for <OpOverload(op='quantized_decomposed.dequantize_per_channel', overload='default')>. Failure message: No decompositions registered for the real-valued input
⬆️
<class 'torch.onnx._internal.exporter._errors.ConversionError'>: Error when translating node %dequantize_per_channel : [num_users=1] = call_function[target=torch.ops.quantized_decomposed.dequantize_per_channel.default](args = (%b__frozen_param0, %b__scale_0, %b__zero_point_0, 0, -127, 127, torch.int8), kwargs = {}). See the stack trace for more information.
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro RTX 8000
GPU 1: Quadro RTX 8000
GPU 2: Quadro RTX 8000
GPU 3: Quadro RTX 8000
Nvidia driver version: 550.120
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) W-2295 CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2004.998
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-35
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.1
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
| true
|
2,762,094,866
|
Missing 'torch' wheel for version '1.8.2' in official index
|
hwhsu1231
|
closed
|
[
"module: binaries",
"oncall: releng"
] | 1
|
NONE
|
### 🐛 Describe the bug
Recently, I tried to install the Torch [1.8.2](https://github.com/pytorch/pytorch/tree/v1.8.2) package with the following command:
```bash
pip install torch==1.8.2 --index-url=https://download.pytorch.org/whl --progress-bar=off --verbose
```
However, it failed with the following error:
```bash
Using pip 22.3.1 from /home/hwhsu1231/Repo/testing/.venv/lib/python3.7/site-packages/pip (python 3.7)
Looking in indexes: https://download.pytorch.org/whl
ERROR: Could not find a version that satisfies the requirement torch==1.8.2 (from versions: 0.4.1, 0.4.1.post2, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.2.0+cpu, 1.2.0+cu92, 1.3.0, 1.3.0+cpu, 1.3.0+cu100, 1.3.0+cu92, 1.3.1, 1.3.1+cpu, 1.3.1+cu100, 1.3.1+cu92, 1.4.0, 1.4.0+cpu, 1.4.0+cu100, 1.4.0+cu92, 1.5.0, 1.5.0+cpu, 1.5.0+cu101, 1.5.0+cu92, 1.5.1, 1.5.1+cpu, 1.5.1+cu101, 1.5.1+cu92, 1.6.0, 1.6.0+cpu, 1.6.0+cu101, 1.6.0+cu92, 1.7.0, 1.7.0+cpu, 1.7.0+cu101, 1.7.0+cu110, 1.7.0+cu92, 1.7.1, 1.7.1+cpu, 1.7.1+cu101, 1.7.1+cu110, 1.7.1+cu92, 1.7.1+rocm3.7, 1.7.1+rocm3.8, 1.8.0, 1.8.0+cpu, 1.8.0+cu101, 1.8.0+cu111, 1.8.0+rocm3.10, 1.8.0+rocm4.0.1, 1.8.1+cpu, 1.8.1+cu101, 1.8.1+cu102, 1.8.1+cu111, 1.8.1+rocm3.10, 1.8.1+rocm4.0.1, 1.9.0+cpu, 1.9.0+cu102, 1.9.0+cu111, 1.9.0+rocm4.0.1, 1.9.0+rocm4.1, 1.9.0+rocm4.2, 1.9.1+cpu, 1.9.1+cu102, 1.9.1+cu111, 1.9.1+rocm4.0.1, 1.9.1+rocm4.1, 1.9.1+rocm4.2, 1.10.0+cpu, 1.10.0+cu102, 1.10.0+cu111, 1.10.0+cu113, 1.10.0+rocm4.0.1, 1.10.0+rocm4.1, 1.10.0+rocm4.2, 1.10.1+cpu, 1.10.1+cu102, 1.10.1+cu111, 1.10.1+cu113, 1.10.1+rocm4.0.1, 1.10.1+rocm4.1, 1.10.1+rocm4.2, 1.10.2+cpu, 1.10.2+cu102, 1.10.2+cu111, 1.10.2+cu113, 1.10.2+rocm4.0.1, 1.10.2+rocm4.1, 1.10.2+rocm4.2, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2)
ERROR: No matching distribution found for torch==1.8.2
WARNING: There was an error checking the latest version of pip.
```
After checking the index URL of [torch](https://download.pytorch.org/whl/torch), there is indeed no `1.8.2` version of the torch wheel. However, there is a [v1.8.2](https://github.com/pytorch/pytorch/tree/v1.8.2) tag in this repository. If this is an oversight by the PyTorch team, I hope a torch wheel for version `1.8.2` can be provided to address it.

The followings are the commands I used and the full log:
```bash
conda create python=3.7 --prefix ./.venv --yes
conda activate ./.venv
export PYTHONNOUSERSITE=1
pip install torch==1.8.2 --index-url=https://download.pytorch.org/whl --progress-bar=off --verbose
```
[log-failed-to-pip-install-torch-1-8-2.txt](https://github.com/user-attachments/files/18268389/log-failed-to-pip-install-torch-1-8-2.txt)
### Versions
- Python version: `3.7`
- PyTorch version: `1.8.2`
cc @seemethere @malfet @osalpekar @atalman
| true
|
2,762,060,575
|
RegisterCPU.cpp likely needs to be sharded
|
swolchok
|
closed
|
[
"module: build",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
We shard other generated files, but apparently not this one. I attempted to build PyTorch on Raspberry Pi 5 and came back hours later to an unresponsive Pi 5 with "Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o" on the screen (suggesting that the build is swapping due to requiring more RAM than available). I vaguely recall also noticing this being an issue when building on my x86 Windows PC -- the memory requirement to build this file was large enough that I had to increase WSL's memory limit.
### Alternatives
do nothing
### Additional context
_No response_
cc @malfet @seemethere
| true
|
2,762,059,678
|
[inductor] Make generated kernels deterministic
|
jansel
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143951
`"compile_id"` had slipped into our generated Triton code (in the
metadata), which will defeat caching because the same kernels generated
in a different order would not get cache hits with each other.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,762,058,344
|
Remove aten/src/ATen/core/Array.h
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
It's not used in OSS and is not part of the public API.
| true
|
2,761,980,859
|
[MPS] Fix `torch.add(x,y, alpha=2)` crash
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #143977
* #143973
* __->__ #143949
* #143948
TODO: in a follow-up PR, replace this weird logic with shaders
Fixes https://github.com/pytorch/pytorch/issues/143932
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,761,980,841
|
[MPS] Fix crash when mm is invoked with mixed dtypes
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #143977
* #143973
* #143949
* __->__ #143948
Simply done by copy-n-pasting the check from
https://github.com/pytorch/pytorch/blob/a7915c56f6a62266490be355b3d823b1e447a475/aten/src/ATen/native/cuda/Blas.cpp#L254-L257
| true
|
2,761,909,094
|
Add differentiable flag to SGD
|
EmmettBicker
|
open
|
[
"module: optimizer",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
When using SGD on CUDA, it defaults to the foreach implementation. Because SGD is so simple, this often doesn't call any non-differentiable operations, but when using weight_decay it calls `_foreach_add`, which is non-differentiable.
Currently, to make CUDA SGD with weight_decay differentiable, you need to pass in `foreach=False`, which feels unintuitive, so adding a differentiable flag would be clearer.
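For reference, a minimal sketch of the current workaround described above (the model and hyperparameters are placeholders, not from the original report):
```python
import torch

model = torch.nn.Linear(4, 4)
if torch.cuda.is_available():
    model = model.cuda()

# Workaround described above: pass foreach=False so the single-tensor
# implementation is used and the weight_decay path stays differentiable.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-2, foreach=False)
```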
### Alternatives
_No response_
### Additional context
I can probably do this one!
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,761,909,086
|
Support getattr for tensor subclasses in pre-dispatch export via patching tensor.getattr
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143946
Previous discussion: https://github.com/pytorch/pytorch/pull/143671#issuecomment-2560112499 and https://github.com/pytorch/pytorch/pull/143671
Differential Revision: [D67693609](https://our.internmc.facebook.com/intern/diff/D67693609)
| true
|
2,761,909,051
|
Fix subclass unwrapping bug
|
tugsbayasgalan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143946
* __->__ #143945
I noticed a small bug in the tensor subclass unwrapping logic. cc @IvanKobzarev
It seems easier to just implement it recursively, so that it is easier to track which inner attrs map to the corresponding plain tensors; both aot_autograd and fake_tensor implement subclass unwrapping recursively.
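For illustration only (this is not the code touched by this PR), a generic sketch of recursive unwrapping, assuming the traceable-subclass `__tensor_flatten__` protocol:
```python
import torch


def unwrap_to_plain_tensors(t):
    # Plain tensors are returned as-is; wrapper subclasses are flattened
    # recursively so nested subclasses map down to their inner plain tensors.
    if not hasattr(t, "__tensor_flatten__"):
        return [t]
    attr_names, _ctx = t.__tensor_flatten__()
    out = []
    for name in attr_names:
        out.extend(unwrap_to_plain_tensors(getattr(t, name)))
    return out
```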
Differential Revision: [D67693610](https://our.internmc.facebook.com/intern/diff/D67693610)
| true
|
2,761,815,361
|
remove allow-untyped-defs from _export/pass_infra/proxy_value.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: export"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143944
| true
|
2,761,815,337
|
remove allow-untyped-defs from onnx/_internal/_lazy_import.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143944
* __->__ #143943
| true
|
2,761,815,323
|
remove allow-untyped-defs from torch/_size_docs.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143944
* #143943
* __->__ #143942
| true
|
2,761,815,307
|
remove allow-untyped-defs from _inductor/compile_worker/watchdog.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143944
* #143943
* #143942
* __->__ #143941
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,761,806,414
|
Fix assertion failure in pytorch profiler
|
kadeng
|
closed
|
[
"fb-exported",
"oncall: profiler",
"Merged",
"ciflow/trunk",
"release notes: profiler",
"topic: bug fixes",
"topic: not user facing"
] | 55
|
CONTRIBUTOR
|
Summary:
Attempt to fix the following exception, which occurred when profiling a PyTorch model (a Meta-internal LLM) that also involved a ThreadPoolExecutor in the background:
```
Exception Found: !stack.empty() INTERNAL ASSERT FAILED at "fbcode/caffe2/torch/csrc/autograd/profiler_python.cpp":987, please report a bug to PyTorch. Python replay stack is empty.
```
The root cause of this issue seems to be that a thread's call stack can be empty, while the code asserts that it is not.
I fixed this with some minimal changes to profiler_python.cpp
Approach:
* Ensuring that the stack in question is not empty before trying to pop from it.
Test Plan:
* Tested manually on a reproducible scenario where the assertion failure was otherwise triggered (repro too large to include here). The assertion failure disappears.
* CI
Differential Revision: D67691558
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,761,794,768
|
Add "enabled=True" argument to DistributedDataParallel.no_sync()
|
avihu111
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Stale",
"release notes: distributed (ddp)"
] | 8
|
NONE
|
The `ddp.no_sync(enabled=True)` form allows easier implementation of gradient accumulation/syncing mechanisms and will help prevent code duplication, as sketched below.
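A rough sketch of the branching the current API requires versus the proposed form; `ddp_model`, `batch`, and `sync_now` are placeholders, and the `enabled=` signature is the proposal, not an existing API:
```python
import contextlib


def accumulation_step(ddp_model, batch, sync_now):
    # Today: manually pick between no_sync() and a null context.
    ctx = contextlib.nullcontext() if sync_now else ddp_model.no_sync()
    with ctx:
        loss = ddp_model(batch).sum()
        loss.backward()

    # With the proposed API from this PR (hypothetical until merged):
    # with ddp_model.no_sync(enabled=not sync_now):
    #     loss = ddp_model(batch).sum()
    #     loss.backward()
```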
It is a small and backward-compatible change.
Additional Details in Issue #143721
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,761,787,624
|
Modify old differentiable optimizer tests to use optim_db
|
EmmettBicker
|
closed
|
[
"module: optimizer",
"module: tests",
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Fourth PR in a larger project to broaden differentiable optimizer support with @janeyx99 ! This one is the first step in Step 0.
This PR replaces the 14 old differentiable tester functions with one differentiable tester function that uses optim_db and ensures that the gradient can flow through the optimizer w.r.t. params for all accepted OptimizerInput values attributed to each OptimizerInfo.
It also revealed a potentially confusing case where SGD defaults to foreach on cuda, and as it doesn't have a differentiable flag, it just throws a normal error that makes it look like a differentiable optimizer isn't supported, so I'll create an issue that brings that up!
Also, I added one typehint and fixed another in common_optimizers. This might not be great for organization so I can move it into another PR if desired.
The tests used to take 0.7 seconds, but now that they run every OptimizerInput test, they take around 7 seconds. Every time it runs gradcheck, it prints a warning about using a deprecated version of vmap (which also happened last time) -- I believe fixing this warning would cut a lot of time off the tests, since the runtime seems largely consumed by printing!
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @mruberry @ZainRizvi
| true
|
2,761,774,245
|
[BE][Ez]: Update fmtlib submodule to 1.11.1
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
* Exactly the same as the previous fmtlib, except it fixes an edge case that could affect ABI compatibility between fmtlib versions.
* Seems safe to update
| true
|
2,761,632,431
|
pytorch v2.3.1 build failed - CUDA kernel function
|
lida2003
|
closed
|
[
"module: build",
"triaged",
"module: jetson",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention",
"module: sdpa"
] | 5
|
NONE
|
### 🐛 Describe the bug
pytorch v2.3.1 build for nvidia jetson orin nano 8GB failed - CUDA kernel function
After fixing [the memory issue (increased to an 8GB swap file)](https://github.com/pytorch/pytorch/issues/143856), I still can't compile the code. The problem seems to be related to a missing return statement in a CUDA kernel function, specifically in flash_bwd_hdim256_fp16_sm80.cu, but I didn't find any existing issue about this.
Has anyone met this before?
```
Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.4 [L4T 35.6.0]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
- P-Number: p3767-0005
- Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
- Distribution: Ubuntu 20.04 focal
- Release: 5.10.216-tegra
jtop:
- Version: 4.2.12
- Service: Active
Libraries:
- CUDA: 11.4.315
- cuDNN: 8.6.0.166
- TensorRT: 8.5.2.2
- VPI: 2.4.8
- OpenCV: 4.9.0 - with CUDA: YES
DeepStream C/C++ SDK version: 6.3
Python Environment:
Python 3.8.10
GStreamer: YES (1.16.3)
NVIDIA CUDA: YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
OpenCV version: 4.9.0 CUDA True
YOLO version: 8.3.33
Torch version: 2.1.0a0+41361538.nv23.06
Torchvision version: 0.16.1+fdea156
DeepStream SDK version: 1.1.8
```
```
$ git log -n 1
commit 63d5e9221bedd1546b7d364b5ce4171547db12a9 (HEAD, tag: v2.3.1)
Author: pytorchbot <soumith+bot@pytorch.org>
Date: Wed May 29 08:15:01 2024 -0700
[EZ] Pin scipy to 1.12 for Py-3.12 (#127322)
[EZ] Pin scipy to 1.12 for Py-3.12 (#123795)
This caused false positive failures/reverts for https://github.com/pytorch/pytorch/pull/123689 and https://github.com/pytorch/pytorch/pull/123595
Fixes https://github.com/pytorch/pytorch/issues/123655
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123795
Approved by: https://github.com/huydhn
(cherry picked from commit 2a597cfd2c63459dd303cf7922eb4c3750a76e75)
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
```
### Error logs
```
Building wheel torch-2.3.1
-- Building version 2.3.1
cmake --build . --target install --config Release
[1/620] Building CUDA object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/transformers/cuda/flash_attn/kernels/flash_bwd_hdim256_fp16_sm80.cu.o
FAILED: caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/transformers/cuda/flash_attn/kernels/flash_bwd_hdim256_fp16_sm80.cu.o
/usr/bin/ccache /usr/local/cuda/bin/nvcc -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DFLASHATTENTION_DISABLE_ALIBI -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_BUILD_MAIN_LIB -DUSE_CUDA -DUSE_EXTERNAL_MZCRC -DUSE_FLASH_ATTENTION -DUSE_MEM_EFF_ATTENTION -D_FILE_OFFSET_BITS=64 -Dtorch_cuda_EXPORTS -DTORCH_ASSERT_NO_OPERATORS -I/home/daniel/Work/pytorch_v2.3.1/build/aten/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build -I/home/daniel/Work/pytorch_v2.3.1 -I/home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/benchmark/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/aten/src/THC -I/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/cuda -I/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/../../../third_party/cutlass/include -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/.. -I/home/daniel/Work/pytorch_v2.3.1/c10/cuda/../.. -I/home/daniel/Work/pytorch_v2.3.1/c10/.. -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googlemock/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googletest/include -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/protobuf/src -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/XNNPACK/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/cudnn_frontend/include -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -D_GLIBCXX_USE_CXX11_ABI=1 -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_87,code=sm_87 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -O3 -DNDEBUG -std=c++17 -Xcompiler=-fPIC -D__NEON__ -Xcompiler=-Wall,-Wextra,-Wdeprecated,-Wno-unused-parameter,-Wno-unused-function,-Wno-missing-field-initializers,-Wno-unknown-pragmas,-Wno-type-limits,-Wno-array-bounds,-Wno-unknown-pragmas,-Wno-strict-overflow,-Wno-strict-aliasing,-Wno-maybe-uninitialized -Wno-deprecated-copy -MD -MT caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/transformers/cuda/flash_attn/kernels/flash_bwd_hdim256_fp16_sm80.cu.o -MF caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/transformers/cuda/flash_attn/kernels/flash_bwd_hdim256_fp16_sm80.cu.o.d -x cu -c /home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/kernels/flash_bwd_hdim256_fp16_sm80.cu -o caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/transformers/cuda/flash_attn/kernels/flash_bwd_hdim256_fp16_sm80.cu.o
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/utils.h(211): warning: missing return statement at end of non-void function "pytorch_flash::convert_layout_acc_Aregs<MMA_traits,Layout>(Layout) [with MMA_traits=cute::TiledMMA<std::conditional_t<true, cute::MMA_Atom<cute::SM80_16x8x16_F32F16F16F32_TN>, cute::MMA_Atom<cute::SM80_16x8x16_F32BF16BF16F32_TN>>, cute::Layout<cute::Shape<cute::Int<4>, cute::Int<2>, cute::_1>, cute::LayoutLeft::Apply<cute::Shape<cute::Int<4>, cute::Int<2>, cute::_1>>>, cute::Tile<cute::Int<64>, cute::Int<32>, cute::_16>>, Layout=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2>, cute::_1, cute::_4>, cute::tuple<cute::tuple<cute::_1, cute::_2>, cute::C<0>, cute::C<4>>>]"
detected during:
instantiation of "auto pytorch_flash::convert_layout_acc_Aregs<MMA_traits,Layout>(Layout) [with MMA_traits=cute::TiledMMA<std::conditional_t<true, cute::MMA_Atom<cute::SM80_16x8x16_F32F16F16F32_TN>, cute::MMA_Atom<cute::SM80_16x8x16_F32BF16BF16F32_TN>>, cute::Layout<cute::Shape<cute::Int<4>, cute::Int<2>, cute::_1>, cute::LayoutLeft::Apply<cute::Shape<cute::Int<4>, cute::Int<2>, cute::_1>>>, cute::Tile<cute::Int<64>, cute::Int<32>, cute::_16>>, Layout=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2>, cute::_1, cute::_4>, cute::tuple<cute::tuple<cute::_1, cute::_2>, cute::C<0>, cute::C<4>>>]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_kernel.h(543): here
instantiation of "void pytorch_flash::compute_dq_dk_dv_1colblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Is_first,Is_last,Seq_parallel,Params>(const Params &, int, int, int) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 64, 8, 4, 2, 2, false, false, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=false, Is_even_MN=false, Is_even_K=true, Is_first=false, Is_last=false, Seq_parallel=true, Params=pytorch_flash::Flash_bwd_params]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_kernel.h(824): here
instantiation of "void pytorch_flash::compute_dq_dk_dv_seqk_parallel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Params>(const Params &) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 64, 8, 4, 2, 2, false, false, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=false, Is_even_MN=false, Is_even_K=true, Params=pytorch_flash::Flash_bwd_params]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_launch_template.h(47): here
instantiation of "void pytorch_flash::flash_bwd_dq_dk_dv_loop_seqk_parallel_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K>(pytorch_flash::Flash_bwd_params) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 64, 8, 4, 2, 2, false, false, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 64, 8, cutlass::half_t>>, Is_dropout=true, Is_causal=true, Is_local=false, Has_alibi=false, Is_even_MN=false, Is_even_K=true]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_launch_template.h(98): here
instantiation of "void pytorch_flash::run_flash_bwd_seqk_parallel<Kernel_traits,Is_dropout>(pytorch_flash::Flash_bwd_params &, cudaStream_t) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 64, 8, 4, 2, 2, false, false, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 64, 8, cutlass::half_t>>, Is_dropout=true]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_launch_template.h(132): here
instantiation of "void pytorch_flash::run_flash_bwd<Kernel_traits,Is_dropout>(pytorch_flash::Flash_bwd_params &, cudaStream_t) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 64, 8, 4, 2, 2, false, false, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 64, 8, cutlass::half_t>>, Is_dropout=true]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_launch_template.h(324): here
instantiation of "void pytorch_flash::run_mha_bwd_hdim256<T>(pytorch_flash::Flash_bwd_params &, cudaStream_t) [with T=cutlass::half_t]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/kernels/flash_bwd_hdim256_fp16_sm80.cu(12): here
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/utils.h(211): warning: missing return statement at end of non-void function "pytorch_flash::convert_layout_acc_Aregs<MMA_traits,Layout>(Layout) [with MMA_traits=cute::TiledMMA<std::conditional_t<true, cute::MMA_Atom<cute::SM80_16x8x16_F32F16F16F32_TN>, cute::MMA_Atom<cute::SM80_16x8x16_F32BF16BF16F32_TN>>, cute::Layout<cute::Shape<cute::Int<4>, cute::Int<2>, cute::_1>, cute::LayoutLeft::Apply<cute::Shape<cute::Int<4>, cute::Int<2>, cute::_1>>>, cute::Tile<cute::Int<64>, cute::Int<32>, cute::_16>>, Layout=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2>, cute::_1, cute::_2>, cute::tuple<cute::tuple<cute::_1, cute::_2>, cute::C<0>, cute::C<4>>>]"
detected during:
instantiation of "auto pytorch_flash::convert_layout_acc_Aregs<MMA_traits,Layout>(Layout) [with MMA_traits=cute::TiledMMA<std::conditional_t<true, cute::MMA_Atom<cute::SM80_16x8x16_F32F16F16F32_TN>, cute::MMA_Atom<cute::SM80_16x8x16_F32BF16BF16F32_TN>>, cute::Layout<cute::Shape<cute::Int<4>, cute::Int<2>, cute::_1>, cute::LayoutLeft::Apply<cute::Shape<cute::Int<4>, cute::Int<2>, cute::_1>>>, cute::Tile<cute::Int<64>, cute::Int<32>, cute::_16>>, Layout=cute::Layout<cute::tuple<cute::tuple<cute::_2, cute::_2>, cute::_1, cute::_2>, cute::tuple<cute::tuple<cute::_1, cute::_2>, cute::C<0>, cute::C<4>>>]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_kernel.h(543): here
instantiation of "void pytorch_flash::compute_dq_dk_dv_1colblock<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Is_first,Is_last,Seq_parallel,Params>(const Params &, int, int, int) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 32, 8, 4, 1, 2, true, true, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 32, 8, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=false, Is_even_MN=false, Is_even_K=true, Is_first=false, Is_last=false, Seq_parallel=true, Params=pytorch_flash::Flash_bwd_params]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_kernel.h(824): here
instantiation of "void pytorch_flash::compute_dq_dk_dv_seqk_parallel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K,Params>(const Params &) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 32, 8, 4, 1, 2, true, true, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 32, 8, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=false, Is_even_MN=false, Is_even_K=true, Params=pytorch_flash::Flash_bwd_params]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_launch_template.h(47): here
instantiation of "void pytorch_flash::flash_bwd_dq_dk_dv_loop_seqk_parallel_kernel<Kernel_traits,Is_dropout,Is_causal,Is_local,Has_alibi,Is_even_MN,Is_even_K>(pytorch_flash::Flash_bwd_params) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 32, 8, 4, 1, 2, true, true, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 32, 8, cutlass::half_t>>, Is_dropout=false, Is_causal=true, Is_local=false, Has_alibi=false, Is_even_MN=false, Is_even_K=true]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_launch_template.h(98): here
instantiation of "void pytorch_flash::run_flash_bwd_seqk_parallel<Kernel_traits,Is_dropout>(pytorch_flash::Flash_bwd_params &, cudaStream_t) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 32, 8, 4, 1, 2, true, true, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 32, 8, cutlass::half_t>>, Is_dropout=false]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_launch_template.h(132): here
instantiation of "void pytorch_flash::run_flash_bwd<Kernel_traits,Is_dropout>(pytorch_flash::Flash_bwd_params &, cudaStream_t) [with Kernel_traits=pytorch_flash::Flash_bwd_kernel_traits<256, 64, 32, 8, 4, 1, 2, true, true, cutlass::half_t, pytorch_flash::Flash_kernel_traits<256, 64, 32, 8, cutlass::half_t>>, Is_dropout=false]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/flash_bwd_launch_template.h(324): here
instantiation of "void pytorch_flash::run_mha_bwd_hdim256<T>(pytorch_flash::Flash_bwd_params &, cudaStream_t) [with T=cutlass::half_t]"
/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/native/transformers/cuda/flash_attn/kernels/flash_bwd_hdim256_fp16_sm80.cu(12): here
Killed
```
### Versions
```
daniel@daniel-nvidia:~/Work/pytorch$ python3 collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 7 2024, 13:10:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.216-tegra-aarch64-with-glibc2.29
Is CUDA available: N/A
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Thread(s) per core: 1
Core(s) per socket: 3
Socket(s): 2
Vendor ID: ARM
Model: 1
Model name: ARMv8 Processor rev 1 (v8l)
Stepping: r0p1
CPU max MHz: 1510.4000
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 1.5 MiB
L3 cache: 2 MiB
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.3.12
[pip3] onnxruntime==1.16.3
[pip3] onnxruntime-gpu==1.17.0
[pip3] onnxslim==0.1.36
[pip3] optree==0.13.1
[pip3] torch==2.1.0a0+41361538.nv23.6
[pip3] torch2trt==0.5.0
[pip3] torchvision==0.16.1
[conda] Could not collect
```
cc @malfet @seemethere @ptrblck @puririshi98 @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,761,578,763
|
[MPS] Fix fmin/fmax for scalar argument
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143934
CPU scalar promotion to GPU is allowed for CUDA and should be allowed for MPS as well (at the very least it should not crash).
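A minimal sketch of the behavior in question (illustrative values; each snippet assumes the corresponding backend is available — the CUDA call works today, and the MPS call is what this change is meant to support):
```python
import torch

# 0-dim CPU tensor promoted to the GPU tensor's device: already works on CUDA.
x = torch.rand(7, device="cuda")
print(torch.fmax(x, torch.tensor(0.3)))

# The same promotion on MPS should also work (or at the very least not crash).
y = torch.rand(7, device="mps")
print(torch.fmax(y, torch.tensor(0.3)))
```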
Fixes https://github.com/pytorch/pytorch/issues/143933 https://github.com/pytorch/pytorch/issues/142203
| true
|
2,761,562,753
|
torch.fmax() between MPS tensor and CPU scalar crashes
|
malfet
|
closed
|
[
"module: crash",
"triaged",
"module: regression",
"module: mps"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Discovered while running `python ../test/inductor/test_torchinductor.py -v -k test_fmin_fmax_mps`
It can also be reproduced with a single line:
```
python -c "import torch;print(torch.fmax(torch.rand(7, device='mps'), torch.tensor(.3)))"
```
### Versions
2.5.1, nightly
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,761,534,399
|
`torch.add` between float and int crashes when alpha is specified
|
malfet
|
closed
|
[
"module: crash",
"triaged",
"module: mps"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Discovered while attempting to run test_torchinductor.py
```
% python test/inductor/test_torchinductor.py -v -k test_add_const_int_mps
test_add_const_int_mps (__main__.GPUTests.test_add_const_int_mps) ... (mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:233:0: error: 'mps.multiply' op requires the same element type for all operands and results
(mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:233:0: note: see current operation: %4 = "mps.multiply"(%2, %arg2) : (tensor<1xf32>, tensor<1xsi64>) -> tensor<*xf32>
(mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:233:0: error: 'mps.multiply' op requires the same element type for all operands and results
(mpsFileLoc): /AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm:233:0: note: see current operation: %4 = "mps.multiply"(%2, %arg2) : (tensor<1xf32>, tensor<1xsi64>) -> tensor<*xf32>
/AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:975: failed assertion `original module failed verification'
```
Or by running something like
```
python -c "import torch;print(torch.add(torch.rand(32, device='mps'), torch.arange(32, device='mps', dtype=torch.int32), alpha=2))"
```
### Versions
2.5.1, nightly
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,761,388,595
|
The model compiled with torch.compile encounters an error when run.
|
WangGewu
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
### 🐛 Describe the bug
I have an LLM and used torch.compile to compile the per-step decoding function. I then wrapped it behind a server-side interface using Flask. When I make two concurrent calls, I get the following error:
```
unknown:0: unknown: block: [1,0,0], thread: [32,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [1,0,0], thread: [33,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [1,0,0], thread: [34,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [1,0,0], thread: [35,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [1,0,0], thread: [36,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [1,0,0], thread: [37,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [1,0,0], thread: [38,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [1,0,0], thread: [39,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
....
unknown:0: unknown: block: [2,0,0], thread: [57,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [2,0,0], thread: [58,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [2,0,0], thread: [59,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [2,0,0], thread: [60,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [2,0,0], thread: [61,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [2,0,0], thread: [62,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
unknown:0: unknown: block: [2,0,0], thread: [63,0,0] Assertion `index out of bounds: 0 <= tmp7 < 1291` failed.
  0%|          | 1/3633 [00:00<02:30, 24.07it/s]
Exception in thread Thread-10 (llm_job):
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/fish-speech-v2/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "/home/ubuntu/anaconda3/envs/fish-speech-v2/lib/python3.10/threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "/mnt/md0/wy/workspace/tts/open_resource/llm_tts_server/llm_tts.py", line 101, in llm_job
    for idx, token in enumerate(llm.llm(tts_text_token, spk_embedding)):
  File "/mnt/md0/wy/workspace/tts/open_resource/llm_tts_server/fish_speech/llm.py", line 36, in llm
    for y in self.generate(
  File "/home/ubuntu/anaconda3/envs/fish-speech-v2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 56, in generator_context
    response = gen.send(request)
  File "/home/ubuntu/anaconda3/envs/fish-speech-v2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 56, in generator_context
    response = gen.send(request)
  File "/mnt/md0/wy/workspace/tts/open_resource/llm_tts_server/fish_speech/llm.py", line 274, in generate
    for x in self.decode_n_tokens(
  File "/mnt/md0/wy/workspace/tts/open_resource/llm_tts_server/fish_speech/llm.py", line 206, in decode_n_tokens
    if cur_token[0, 0, -1] == self.im_end_id:
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
**However, when only one request is in flight there are no errors, and without torch.compile two concurrent calls also complete without errors.**
### Versions
Python: 3.10.4
PyTorch: 2.1.0+cu118
compile code:
```python
decode_one_token = torch.compile(decode_one_token, mode="reduce-overhead", fullgraph=True)
```
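As a hypothetical diagnostic (not from the report): `decode_one_token` compiled with `mode="reduce-overhead"` is shared across Flask worker threads, so serializing access to it can show whether the assert only fires when two requests interleave on the same compiled callable. A minimal sketch reusing the names above:
```python
import threading

_decode_lock = threading.Lock()  # hypothetical guard around the shared compiled callable

def decode_one_token_locked(*args, **kwargs):
    # If the device-side assert disappears with this guard, the failure is tied to
    # concurrent use of the single compiled (reduce-overhead) callable.
    with _decode_lock:
        return decode_one_token(*args, **kwargs)
```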
server code:
```python
import os
import sys
import argparse
from loguru import logger
from flask import Flask, jsonify, request, Response
from flask_socketio import SocketIO, emit
import uvicorn
import base64
import torch
import torchaudio
import io
import threading
import time
import uuid
import numpy as np
import json
from werkzeug.utils import secure_filename
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append('{}/..'.format(ROOT_DIR))
from llm_tts import LLM_TTS
from constant import MAX_DEC_NUM, MIN_PROMPT_AUDIO_DUR, MAX_PROMPT_AUDIO_DUR, MIN_PROMPT_AUDIO_SAMPLE_RATE, MAX_TEXT_LENGTH
g_lock = threading.Lock()
g_current_dec_num = 0
app = Flask(__name__)
def buildResponse(outputs):
    buffer = io.BytesIO()
    result = []
    for i, output in enumerate(outputs):
        sample_rate = output['sample_rate']
        result.append(output['tts_speech'])
    torchaudio.save(buffer, torch.cat(result, dim=1), sample_rate, format="wav")
    buffer.seek(0)
    return Response(buffer.getvalue(), mimetype="audio/wav")

def audio_to_pcm_int16(audio: np.ndarray, sample_rate: int = None) -> bytes:
    '''
    `audio` is expected to have values in range [-1.0, 1.0] in float format.
    '''
    audio = (audio * np.iinfo(np.int16).max).astype(np.int16)
    return audio.tobytes()

@app.route("/inference_tts", methods=['POST'])
def inference_tts():
    request_data = request.get_json()
    tts_text = request_data.get("tts_text")
    spk_id = request_data.get("spk_id")
    if not tts_text or not spk_id:
        return jsonify({"error": "Missing 'tts_text', 'prompt_text' or 'prompt_wav' in request data"}), 400
    request_id = str(uuid.uuid4())
    model_output = llm_tts.tts(tts_text, spk_id)
    response = buildResponse(model_output)
    return response

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--port',
                        type=int,
                        default=50000)
    parser.add_argument('--config_path',
                        type=str,
                        default="config/config.yaml",
                        help='model dir')
    args = parser.parse_args()
    llm_tts = LLM_TTS(args.config_path)
    # warmup
    outputs = llm_tts.tts("This is a test case.", "female")
    for i, output in enumerate(outputs):
        pass
    app.run(host="0.0.0.0", port=args.port, debug=False)
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,761,372,603
|
[Torch.package] Add support for UntypedStorage tensors
|
henryhu6
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: package/deploy"
] | 11
|
CONTRIBUTOR
|
Summary: fp8 tensors use untyped storage. Add support for them in torch.package by reusing the same logic as in serialization.py.
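A hedged sketch of the round trip this enables (file and resource names are illustrative; `torch.float8_e4m3fn` tensors are backed by untyped storage):
```python
import torch
from torch.package import PackageExporter, PackageImporter

t = torch.randn(4, 4).to(torch.float8_e4m3fn)  # fp8 tensor with untyped storage

with PackageExporter("fp8_demo.pt") as exporter:
    exporter.save_pickle("tensors", "t.pkl", t)

loaded = PackageImporter("fp8_demo.pt").load_pickle("tensors", "t.pkl")
print(loaded.dtype)  # torch.float8_e4m3fn
```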
Differential Revision: D67684033
| true
|
2,761,361,576
|
[Codemod][AddExplicitStrictExportArg] caffe2/test/inductor
|
gmagogsfm
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Differential Revision: D67682313
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,761,351,564
|
Fix an unnecessary CPU to GPU copy within flex_attention
|
VivekPanyam
|
closed
|
[
"module: nn",
"triaged",
"open source",
"Stale",
"module: flex attention"
] | 5
|
CONTRIBUTOR
|
There are a few unnecessary CPU-to-GPU copies within flex_attention, each of which triggers a `cudaStreamSynchronize` and reduces GPU utilization.
The existing code creates a `-inf` tensor on CPU and copies it to GPU (along with a synchronize).
The updated code no longer causes a cudaStreamSynchronize (validated with local profiling).
See #143927 for more details.
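A minimal sketch of the pattern being removed (illustrative only, not the literal diff): materializing the scalar on CPU and copying it over blocks on a host-to-device transfer, while constructing it directly on the scores' device does not:
```python
import torch

scores = torch.randn(8, 8, device="cuda")

# CPU-side scalar: the .to(...) forces an H2D copy (and a stream synchronize).
neg_inf_cpu = torch.tensor(-float("inf"))
masked = torch.where(scores > 0, scores, neg_inf_cpu.to(scores.device))

# Device-side construction avoids the copy entirely.
neg_inf_dev = torch.full((), -float("inf"), device=scores.device)
masked = torch.where(scores > 0, scores, neg_inf_dev)
```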
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,761,350,605
|
Unnecessary CPU to GPU copies within flex_attention
|
VivekPanyam
|
closed
|
[
"triaged",
"oncall: pt2",
"module: pt2-dispatcher",
"module: flex attention"
] | 3
|
CONTRIBUTOR
|
There are a few unnecessary CPU-to-GPU copies within flex_attention, each of which triggers a `cudaStreamSynchronize` and reduces GPU utilization.
## Note
The below is based on a profiling run without `torch.compile`. I haven't looked at profiles of the compiled version in depth yet, but based on a quick glance, the second issue with `index_put_` seems to go away when compiled.
(My compiled tests were run with a patch for the first issue below so the results don't confirm whether the patch is necessary or if `compile` fixes it)
## Example 1
This one happens during every execution of attention:
https://github.com/pytorch/pytorch/blob/fe398de769480ceb943d3ae37551a29954bbb147/torch/_higher_order_ops/flex_attention.py#L189
That line creates a `-inf` tensor on CPU and copies it to GPU (along with a synchronize). It should be replaced with
```py
torch.full((), -float("inf"), dtype=working_precision, device=scores.device),
```
I put up a PR to fix it: #143928
## Example 2
In the following code, several `cudaStreamSynchronize`s happen within `aten::index_put_` (the `dense_mask[row_indices, valid_indices] = 1` line).
https://github.com/pytorch/pytorch/blob/fe398de769480ceb943d3ae37551a29954bbb147/torch/nn/attention/flex_attention.py#L152-L166
My profiling shows 3 `cudaStreamSynchronize`s per `create_dense_one` call (which is called multiple times in a single `create_block_mask` call). I was able to decrease it to 2 synchronizes by replacing
https://github.com/pytorch/pytorch/blob/fe398de769480ceb943d3ae37551a29954bbb147/torch/nn/attention/flex_attention.py#L165
with
```py
dense_mask[row_indices, valid_indices] = torch.ones((), dtype=torch.int32, device=device)
```
The other two synchronizes are also within `aten::index_put_`, but they didn't jump out to me based on a cursory scan of the code.
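For anyone reproducing this, a hedged way to count the synchronizes is the kineto profiler; CUDA runtime calls such as `cudaStreamSynchronize` normally show up in the averaged table (exact row names can vary by version, and the mask below is just an example):
```python
import torch
from torch.nn.attention.flex_attention import create_block_mask

def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

with torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU,
                torch.profiler.ProfilerActivity.CUDA],
) as prof:
    create_block_mask(causal, B=1, H=1, Q_LEN=2048, KV_LEN=2048, device="cuda")

# Look for cudaStreamSynchronize / aten::index_put_ rows in the output.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30))
```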
cc @zou3519 @bdhirsh @penguinwu @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng @chauhang @ydwu4
| true
|
2,761,336,059
|
[dynamo] Separate out GetItemSource and DictGetItemSource
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143997
* __->__ #143926
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,761,279,040
|
[export] Support module inputs for non strict mode.
|
zhxchen17
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 15
|
CONTRIBUTOR
|
Summary:
Add experimental support for torch.nn.Module instances as input types.
Before this change, module inputs were not supported, but we recently saw interesting use cases such as gpt-fast (https://github.com/pytorch-labs/gpt-fast/blob/main/generate.py#L68), where a module is passed directly as an input to select between variants of the same model.
Since non-strict mode does not track non-parameter/non-buffer state, module inputs are treated like plain constants during tracing. Each module input is handled like a nested container of tensors: a pytree handler is registered automatically for the module type so its state dict is flattened into a group of tensors, and any method call on the module is inlined during tracing, just like the `self` module in export_for_training. As a result, input modules behave much like the training module in the typical case, except that their tensors are recorded as plain user inputs rather than parameters or buffers.
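A minimal sketch of the kind of call this enables in non-strict mode (module and shapes are illustrative, not from the diff):
```python
import torch

class Runner(torch.nn.Module):
    def forward(self, x, head):
        # `head` is an nn.Module passed as a plain user input; its forward is
        # inlined during tracing and its tensors are flattened via the
        # auto-registered pytree handler.
        return head(x).relu()

ep = torch.export.export(
    Runner(),
    (torch.randn(2, 8), torch.nn.Linear(8, 4)),
    strict=False,  # module inputs are only handled in non-strict mode here
)
print(ep.graph_signature)
```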
Test Plan: buck run mode/opt caffe2/test:test_export -- -r test_module_input
Differential Revision: D67680827
| true
|
2,761,235,921
|
[dynamo] Make ConstDictKeySource a subclass of ChainedSource
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143926
* __->__ #143924
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|