| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,779,156,470
|
Expose several APIs to public (torch python APIs)
|
dilililiwhy
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 18
|
CONTRIBUTOR
|
Fixes #144302
Try to expose several APIs to the public for the privateuse1 scenario.
cc @albanD
| true
|
2,779,114,690
|
Use structure binding
|
cyyever
|
closed
|
[
"module: cpu",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: mps",
"ciflow/mps",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,779,097,589
|
Update the accuracy results for moco and llama
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 3
|
CONTRIBUTOR
|
This has been failing in trunk for some time; let's just update the accuracy results first. The command I ran: `python benchmarks/dynamo/ci_expected_accuracy/update_expected.py 127f836881e75e0c688619b54a35b018a69d7ee7`. I also fixed the update script a bit to make it work after https://github.com/pytorch/pytorch/pull/139337
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,779,047,925
|
quantize_fx.prepare_qat_fx: `get_default_qat_qconfig_mapping` is unused in the example code.
|
LukeLIN-web
|
open
|
[
"module: docs",
"oncall: quantization"
] | 1
|
NONE
|
### 📚 The doc issue
https://pytorch.org/docs/stable/_modules/torch/ao/quantization/quantize_fx.html#prepare_qat_fx
```
from torch.ao.quantization import get_default_qat_qconfig_mapping
```
In the example, it is imported but unused. Not sure which import is correct.
### Suggest a potential alternative/fix
```
from torch.ao.quantization import get_default_qconfig
```
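For context, here is a minimal sketch of how the QAT mapping import would actually be consumed by `prepare_qat_fx` (my own illustration, not from the linked docs; the `Tiny` module and the `"fbgemm"` backend choice are assumptions):
```python
import torch
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_qat_fx

class Tiny(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

model = Tiny().train()
example_inputs = (torch.randn(1, 4),)
# The imported mapping is what gets passed to prepare_qat_fx,
# which is the step the docs example appears to omit.
qconfig_mapping = get_default_qat_qconfig_mapping("fbgemm")
prepared = prepare_qat_fx(model, qconfig_mapping, example_inputs)
```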
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,779,044,988
|
WIP make a test for FSDP mixed precision
|
wconstab
|
closed
|
[
"oncall: distributed",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144521
* #144426
* #144352
* #144345
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|
2,779,041,704
|
[aoti] Remove example inputs from aoti_compile_and_package
|
angelayi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary: The args were removed in https://github.com/pytorch/pytorch/pull/140991
Test Plan: CI
Differential Revision: D67998954
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,779,039,386
|
[ROCm][Inductor][CK] hackfix for segfault in addmm op
|
tenpercent
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
This snippet used to cause a segfault on the GPU due to incorrect input order when invoking the kernel:
```
import os

import torch
import torch.nn as nn
from torch._inductor import config as inductor_config
from torch._inductor.utils import fresh_inductor_cache

M, N, K = 128, 128, 4096
dtype = torch.float16
X = torch.randn(M, N, dtype=dtype).cuda()
A = torch.randn(M, K, dtype=dtype).cuda()
B = torch.randn(K, N, dtype=dtype).cuda()


class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, b, x, y):
        return torch.addmm(b, x, y)


import ck4inductor

ck_dir = os.path.dirname(ck4inductor.__file__)

with fresh_inductor_cache():
    with inductor_config.patch(
        {
            "max_autotune_gemm_backends": "CK",
            "autotune_fallback_to_aten": False,
            "compile_threads": 144,
            "rocm.ck_dir": ck_dir,
        }
    ):
        compiled_model = torch.compile(SimpleModel(), mode="max-autotune")
        res = compiled_model(X, A, B)
        res_eager = torch.addmm(X, A, B)
        torch.testing.assert_close(res, res_eager)
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,779,035,532
|
[minifier] Fix config generator for callables
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13
|
CONTRIBUTOR
|
Summary:
When the config contains callables, the currently generated config cannot be run:
```
torch._dynamo.config.reorderable_logging_functions = {<built-in function print>, <function warning at 0x7f774c595630>, <function log at 0x7f774c595870>, <function error at 0x7f774c595510>, <function info at 0x7f774c595750>, <built-in function warn>, <function exception at 0x7f774c5955a0>, <function debug at 0x7f774c5957e0>, <function critical at 0x7f774c5953f0>}
```
We fix the config generator to emit the right strings, so the config is runnable, like below:
```
import logging
import warnings
torch._dynamo.config.reorderable_logging_functions = { warnings.warn, logging.warn, print }
```
Test Plan:
```
buck2 run 'fbcode//mode/dev-nosan' fbcode//caffe2/test:utils -- -r test_codegen_config
```
Differential Revision: D67998703
| true
|
2,779,022,382
|
Cleanup gpt_fast benchmark
|
nmacchioni
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This is an exact copy of https://github.com/pytorch/pytorch/pull/144484; I bricked the last PR by running `ghstack land` :(
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144517
| true
|
2,779,007,244
|
[dynamo, nested graph breaks] add nested graph break tests
|
williamwen42
|
open
|
[
"module: dynamo",
"ciflow/inductor"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #151056
* __->__ #144516
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,778,997,092
|
[while_loop] specialize when cond_fn return constants
|
ydwu4
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144515
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,778,985,134
|
Custom operation defined in C++, fails op_check and breaks on compile() and dynamo - only in real build
|
borisfom
|
closed
|
[
"triaged",
"module: custom-operators",
"oncall: pt2"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Here is what I get: a custom operation defined in C++ fails opcheck:
```
E torch.testing._internal.optests.generate_tests.OpCheckError: opcheck(op, ...): test_aot_dispatch_static failed with The tensor has a non-zero number of elements, but its data is not allocated yet.
E If you're using torch.compile/export/fx, it is likely that we are erroneously tracing into a custom kernel. To fix this, please wrap the custom kernel into an opaque custom op. Please see the following for details: https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html
E If you're using Caffe2, Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory. (scroll up for stack trace)
```
If I strip out all the real functionality, it passes (see the commented-out block below).
If I literally copy-paste that stripped-down implementation to create a small repro, I can't adequately model calling the CUDA kernel from forward; see the comments.
Here is op.cpp:
```
#include <torch/library.h>
#include <torch/torch.h>
#include <cstdint>
#include <stdexcept>
#include <string>
using namespace torch::autograd;
using at::ScalarType;
using torch::Tensor;
static Tensor _segmented_transpose_fwd_op(const Tensor& tensor,
const Tensor& segment_info,
int64_t input_contiguous_as_info
)
{
// Just returning tensor back works fine:
return tensor;
// Fails if it's like:
// ret = torch::empty_like(tensor);
// <apply CUDA kernel to ret, tensor ...
// return ret;
}
class SegmentedTranspose : public Function<SegmentedTranspose> {
public:
// Forward pass
static Tensor forward(AutogradContext* ctx,
const Tensor& tensor,
const Tensor& segment_info,
int64_t input_contiguous_as_info)
{
auto _tensor = tensor.is_contiguous() ? tensor : tensor.contiguous();
auto _segment_info = segment_info.is_contiguous() ? segment_info : segment_info.contiguous();
// Save the constant for use in the backward pass
ctx->saved_data["segment_info"] = _segment_info;
ctx->saved_data["input_contiguous_as_info"] = input_contiguous_as_info;
// Actual forward
return _segmented_transpose_fwd_op(_tensor,
_segment_info,
input_contiguous_as_info);
}
// Backward pass
static tensor_list backward(AutogradContext* ctx, tensor_list grad_outputs)
{
// Retrieve the saved constant
auto tensor = grad_outputs[0];
auto segment_info = ctx->saved_data["segment_info"].toTensor();
auto input_contiguous_as_info = ctx->saved_data["input_contiguous_as_info"].toInt();
auto currentStream = c10::cuda::getCurrentCUDAStream();
// using the whole new op here to enable double backward
Tensor grad_input = SegmentedTranspose::apply(tensor, segment_info, !input_contiguous_as_info);
return {grad_input, Tensor(), Tensor()}; // Return gradients for inputs
}
};
static Tensor _segmented_transpose_autograd(
const Tensor& tensor,
const Tensor& segment_info,
int64_t input_contiguous_as_info
)
{
return SegmentedTranspose::apply(tensor, segment_info, input_contiguous_as_info);
}
TORCH_LIBRARY_FRAGMENT(cuequivariance_ops_torch, m)
{
// Define an operator schema
m.def(
"segmented_transpose_primitive(Tensor tensor, Tensor segment_info, int "
"input_contiguous_as_info) -> Tensor");
}
TORCH_LIBRARY_IMPL(cuequivariance_ops_torch, CUDA, m)
{
m.impl("segmented_transpose_primitive", _segmented_transpose_fwd_op );
}
TORCH_LIBRARY_IMPL(cuequivariance_ops_torch, AutogradCUDA, m)
{
m.impl("segmented_transpose_primitive", _segmented_transpose_autograd);
}
```
Test:
```
import torch

# Load the shared library
torch.ops.load_library("build/libcustom_operator.so")


@torch.library.register_fake("cuequivariance_ops_torch::segmented_transpose_primitive")
def _(a: torch.Tensor, b: torch.Tensor, c: int) -> torch.Tensor:
    return torch.empty_like(a)


# Use the custom operator
a = torch.tensor([1.0, 2.0], device="cuda", requires_grad=True)
b = torch.tensor([3.0, 4.0], device="cuda", requires_grad=True)
c = 5


class Mul(torch.nn.Module):
    def forward(self, a: torch.Tensor, b: torch.Tensor, c: int) -> torch.Tensor:
        return torch.ops.cuequivariance_ops_torch.segmented_transpose_primitive(a, b, c)


from torch.library import opcheck

ret = opcheck(torch.ops.cuequivariance_ops_torch.segmented_transpose_primitive, (a, b, c))
print(ret)

mul = Mul()
# result = mul(a, b, c)
# print(result)  # Output: tensor([20., 30.], device='cuda:0', grad_fn=<MulBackward1>)
# mul = torch.export.export(mul, (a, b, c)).module()
mul = torch.compile(mul)
result = mul(a, b, c)
print(result)  # Output:
result.sum().backward()
print(a.grad, b.grad)
```
CMakeLists.txt:
```
cmake_minimum_required(VERSION 3.10)
# Project name and version
project(CustomOperator LANGUAGES CXX)
# Set C++ standard
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
# Find PyTorch package
find_package(Torch REQUIRED)
# Include directories
include_directories(${TORCH_INCLUDE_DIRS})
# Add source files
add_library(custom_operator SHARED src/op.cpp)
# Link against Torch library
target_link_libraries(custom_operator "${TORCH_LIBRARIES}")
# Set RPATH for shared library loading
set_target_properties(custom_operator PROPERTIES INSTALL_RPATH "${TORCH_INSTALL_PREFIX}/lib")
```
Does that ring any bells? It does look like it tries to trace where it should not. How can I prevent that?
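For reference, here is a minimal, hypothetical Python-side sketch of the "wrap the kernel into an opaque custom op" pattern the error message points at (my own illustration using `torch.library.custom_op`, not the author's code; the `mylib::segmented_transpose` name and the copy placeholder are made up):
```python
import torch

# Hypothetical Python-side registration; the real kernel launch would replace the copy below.
@torch.library.custom_op("mylib::segmented_transpose", mutates_args=())
def segmented_transpose(tensor: torch.Tensor, segment_info: torch.Tensor, flag: int) -> torch.Tensor:
    out = torch.empty_like(tensor)
    out.copy_(tensor)  # placeholder for the actual CUDA kernel
    return out

# Fake (meta) implementation so torch.compile/export can trace without running the kernel.
@segmented_transpose.register_fake
def _(tensor, segment_info, flag):
    return torch.empty_like(tensor)
```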
### Versions
Pytorch nightly
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225
| true
|
2,778,968,204
|
[docs] Minor fixes to export and aoti docs
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,778,967,902
|
[BE][Opinfo] Delete redundant `dtypesIfCUDA`
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
If they are the same as CPU, no need to have that extra line
Discovered while reviewing https://github.com/pytorch/pytorch/pull/143833
| true
|
2,778,959,601
|
Illegal Memory Access when Using Trainable Biases in Flex Attention
|
cora-codes
|
open
|
[
"module: crash",
"module: cuda",
"triaged",
"module: flex attention"
] | 3
|
NONE
|
### 🐛 Describe the bug
I'm back with another insane flex attention bug report.
Recently, I was playing around with the learnable biases in flex attention when I started hitting an illegal memory access in the backward pass.
After using `CUDA_LAUNCH_BLOCKING` I found it was happening in the following kernel during autotuning:
```python
File "/tmp/torchinductor_ubuntu/jy/cjy56z23prjyls6tnwn4ay4mmmb6vvergqxm4wmnv5l7zlfzk66e.py", line 3784, in call
triton_poi_fused_zeros_1.run(buf0, 100663296, grid=grid(100663296), stream=stream0)
```
```python
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
@triton_heuristics.pointwise(
size_hints={'x': 134217728},
filename=__file__,
triton_meta={'signature': {'out_ptr0': '*fp32', 'xnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, multi_processor_count=114, cc=90, major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1), equal_to_1=())]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_zeros_1', 'mutated_arg_names': [], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 0, 'num_reduction': 0, 'backend_hash': 'BC5F52D6E7923B9DC1733AF7005D933F35B86508E4142BC7F067F48E9C59404B', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_zeros_1(out_ptr0, xnumel, XBLOCK : tl.constexpr):
    xnumel = 100663296
    xoffset = tl.program_id(0) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:]
    xmask = tl.full([XBLOCK], True, tl.int1)
    x0 = xindex
    tmp0 = 0.0
    tl.store(out_ptr0 + (x0), tmp0, None)
```
What might be also related is the following inductor warnings beforehand:
```
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] Failed to reorder for topological_sort_lpmf: SchedulerBuffer(scheduler=<torch._inductor.scheduler.Scheduler object at 0x750d4019caf0>, node=ComputedBuffer(name='buf0', layout=FixedLayout('cuda:0', torch.float32, size=[1, 1, 384, 256], stride=[98304, 98304, 1, 384]), data=Reduction(
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] 'cuda',
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] torch.float32,
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] def inner_fn(index, rindex):
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] _, _, i2, i3 = index
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] r0_0 = rindex
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp0 = ops.load(tangents_1, i2 + 384 * ModularIndexing(r0_0 + 256 * i3, 1, 4096) + 1572864 * ModularIndexing(r0_0 + 256 * i3, 4096, 16))
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp1 = ops.load(add_24, i2 + 384 * ModularIndexing(r0_0 + 256 * i3, 1, 4096) + 1572864 * ModularIndexing(r0_0 + 256 * i3, 4096, 16))
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp2 = ops.load(rsqrt_6, 4096 * ModularIndexing(r0_0 + 256 * i3, 4096, 16) + ModularIndexing(r0_0 + 256 * i3, 1, 4096))
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp3 = tmp1 * tmp2
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp4 = tmp0 * tmp3
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] return tmp4
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] ,
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] ranges=[1, 1, 384, 256],
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] reduction_ranges=[256],
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] reduction_type=sum,
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] origin_node=None,
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] origins=OrderedSet([sum_21, mul_52, mul_49])
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] )), defining_op=SchedulerNode(name='op0'), users=[NodeUser(node=SchedulerNode(name='op1'), can_inplace=False, is_weak=False), NodeUser(node=SchedulerNode(name='op27'), can_inplace=False, is_weak=False)], mpi_buffer=MemoryPlanningInfoForBuffer(size_alloc=0, size_free=0, succ_nodes=OrderedSet([])))
```
I know this likely isn't enough to fix the issue, and I'd love to provide a repro as soon as possible; I'm just having trouble tracking down what exactly is going wrong here.
### Versions
2.7.0.dev20250109+cu126
cc @ptrblck @msaroufim @eqy @Chillee @drisspg @yanboliang @BoyuanFeng @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225
| true
|
2,778,955,241
|
[BE] Fix extra-semi warnings in int4mm_kernel.cpp
|
malfet
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes
```
In file included from /Users/nshulga/git/pytorch/pytorch/build/aten/src/ATen/native/cpu/int4mm_kernel.cpp.DEFAULT.cpp:1:
/Users/nshulga/git/pytorch/pytorch/aten/src/ATen/native/cpu/int4mm_kernel.cpp:998:2: warning: extra ';' outside of a function is incompatible with C++98 [-Wc++98-compat-extra-semi]
};
^
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,778,942,148
|
[MPSInductor] Add dummy properties
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144796
* #144795
* __->__ #144509
For compute capability, return an empty string (same as CPU), and for multi-core count return 8, as this is the smallest number of GPU cores on Apple silicon.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,778,941,223
|
Save integral tensor data for ET
|
shengfukevin
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 22
|
CONTRIBUTOR
|
Summary:
et_replay uses random data to run operators; however, operators that use an index tensor to access memory won't work with random data. This usually ran into two exceptions: 1. illegal memory access, since the index is out of range; this has been fixed with the environment variable ENABLE_PYTORCH_EXECUTION_TRACE_SAVE_INTEGRAL_TENSOR_RANGE, which records the min/max values of index tensors. 2. unaligned memory access; FBGEMM ops have special requirements for the memory layout.
To fix the second exception, ENABLE_PYTORCH_EXECUTION_TRACE_SAVE_INTEGRAL_TENSOR is added to allow the user to specify node names, separated by commas, so ET will save the integral tensor data for those nodes. The saved data will be used in et_replay.
Be careful when turning on this option, since it will use more space to save the extra data.
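A hedged sketch of how the new option might be enabled before collecting a trace (the environment variable name comes from this summary; the node names are made-up examples):
```python
import os

# Made-up node names purely for illustration of the comma-separated format.
os.environ["ENABLE_PYTORCH_EXECUTION_TRACE_SAVE_INTEGRAL_TENSOR"] = (
    "aten::index_select,aten::embedding_bag"
)
```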
Test Plan: buck2 run mode/opt caffe2/test:test_profiler_cuda -- profiler.test_execution_trace.TestExecutionTraceCUDA.test_execution_trace_record_integral_tensor_data_cuda
Differential Revision: D67989856
| true
|
2,778,910,682
|
blocked benchmarking to avoid queue limit
|
nmacchioni
|
closed
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144507
* #144505
* #144501
* #144353
* #133287
* #144365
* #133121
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,778,874,645
|
Add instantiation level to CutlassArgs
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/slow"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144506
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,778,871,575
|
better overlapping of sleep and memory warmup
|
nmacchioni
|
closed
|
[
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144507
* __->__ #144505
* #144501
* #144353
* #133287
* #144365
* #133121
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,778,835,866
|
Confusing documentation for torch.bucketize
|
blaine-rister
|
closed
|
[
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 3
|
CONTRIBUTOR
|
### 📚 The doc issue
I was reading through this page describing `torch.bucketize`, and it took me a few passes to understand the behavior of the `right` argument.
https://pytorch.org/docs/stable/generated/torch.bucketize.html
It says:
> right ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – if False, return the first suitable location that is found. If True, return the last such index. If no suitable index found, return 0 for non-numerical value (eg. nan, inf) or the size of boundaries (one pass the last index). In other words, if False, gets the lower bound index for each value in input from boundaries. If True, gets the upper bound index instead. Default value is False.
This seems to suggest that there are multiple suitable buckets for a value. In reality, there is only ever one suitable bucket, and `right` defines the behavior when `input` equals `boundaries`. The previous section describes it much more clearly.
> right | returned index satisfies
> -- | --
> False | boundaries[i-1] < input[m][n]...[l][x] <= boundaries[i]
> True | boundaries[i-1] <= input[m][n]...[l][x] < boundaries[i]
### Suggest a potential alternative/fix
Would it be more precise to say something like this?
> right ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – controls how buckets are assigned to values in `boundaries`. Let's say `input[i] == boundaries[j]` for some indices `i` and `j`. If `right == False`, `out[i] == j`. Else, `out[i] == j+1`.
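For example (a small illustrative snippet I'm adding here, not part of the current docs), the only case where the two settings differ is when a value lands exactly on a boundary:
```python
import torch

boundaries = torch.tensor([1, 3, 5])
values = torch.tensor([3])

# right=False: boundaries[i-1] < v <= boundaries[i]  ->  index 1
print(torch.bucketize(values, boundaries))
# right=True:  boundaries[i-1] <= v < boundaries[i]  ->  index 2
print(torch.bucketize(values, boundaries, right=True))
```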
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @albanD
| true
|
2,778,832,136
|
Release 2.6.0 validations checklist and cherry-picks
|
atalman
|
closed
|
[
"oncall: releng",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Similar to: https://github.com/pytorch/pytorch/issues/134694
We need to make sure that:
- [x] Python 3.13 wheel validate - https://github.com/pytorch/test-infra/actions/runs/12898326715/job/35965201816#step:14:3798
- [x] Validate Metadata section of wheels - make sure python versions are set
- [x] PyTorch 2.5.0 exposes statically linked libstdc++ CXX11 ABI symbols : https://github.com/pytorch/pytorch/issues/133437
- [ ] CUDA
- [x] pypi binaries with slimmed dependencies are usable in standard AWS containers 2023 regression in 1.13 - https://github.com/pytorch/test-infra/actions/runs/12936813683/job/36083269755#step:14:125
- [ ] Check cuda 1.12.1 update issue: https://github.com/pytorch/pytorch/issues/94772 with small wheels . Passes on GPU but failing on CPU, new issue: https://github.com/pytorch/pytorch/issues/145801
- [ ] `torch.compile`
- [x] Basic test works (for example see test mentioned in https://github.com/openai/triton/pull/1176 ) in PyTorch docker container
- [x] `torch.compile` raises an error if used on Windows. Test (part of torchvision): https://github.com/pytorch/test-infra/actions/runs/12935566485/job/36079281843#step:9:486
- [x] `torch.compile` works on 3.13 : Test: https://github.com/pytorch/test-infra/actions/runs/12873664387/job/35891677345#step:14:3604
- [x] `torch.compile` raises error on 3.13t: https://github.com/pytorch/test-infra/actions/runs/12873664387/job/35891678653#step:14:2811
- MPS
- [x] Resnet is usable out of the box (https://github.com/pytorch/test-infra/actions/runs/12898326715/job/35965216838#step:9:1996)
- Is torchvision usable? True German shepherd (cpu): 37.6% German shepherd (mps): 34.1%
- [x] Validate docker release builds
Issues/Milestone validation
- [x] https://github.com/pytorch/pytorch/issues/137597
- [x] https://github.com/pytorch/pytorch/issues/140797 @atalman
- [x] https://github.com/pytorch/pytorch/pull/144358 @justinchuby
- [ ] https://github.com/pytorch/pytorch/pull/143242 @jithunnair-amd
- [x] https://github.com/pytorch/pytorch/issues/142203 @atalman
- [x] https://github.com/pytorch/pytorch/issues/143933 @atalman
- [x] https://github.com/pytorch/pytorch/issues/142266 @atalman
- [x] https://github.com/pytorch/pytorch/issues/141909 @malfet
- [x] https://github.com/pytorch/pytorch/issues/142344 @atalman
- [x] https://github.com/pytorch/pytorch/issues/141770 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/141046 @kit1980
- [x] https://github.com/pytorch/pytorch/issues/139722 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/142448 @atalman
- [x] https://github.com/pytorch/pytorch/pull/142113 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/141230 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/141949 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/142043 @atalman
- [x] https://github.com/pytorch/pytorch/pull/141948 @atalman
- [x] https://github.com/pytorch/pytorch/pull/141800 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/141333 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/141471 @atalman
- [x] https://github.com/pytorch/pytorch/issues/135867 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/141569 @justinchuby
- [x] https://github.com/pytorch/pytorch/pull/141658 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138049 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/140873 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/140865 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/137886 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/141729 @justinchuby
- [x] https://github.com/pytorch/pytorch/pull/141413 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/141260 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/141080 @justinchuby
- [x] https://github.com/pytorch/pytorch/pull/137428 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/137374 @atalman
- [x] https://github.com/pytorch/pytorch/issues/138340 @drisspg
- [x] https://github.com/pytorch/pytorch/pull/137966 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138802 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/134666 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138186 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138727 @chuanqi129
- [ ] https://github.com/pytorch/pytorch/pull/138354 @nWEIdia
- [x] https://github.com/pytorch/pytorch/issues/138391 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/136559 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137338 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138992 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/138478 @kit1980
- [x] https://github.com/pytorch/pytorch/issues/138851 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137394 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138189 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138543 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/138331 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/137890 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137889 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137745 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137267 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137199 @yifuwang
- [ ] https://github.com/pytorch/pytorch/issues/134929 @atalman
Amazon Linux 2023
- [x] https://github.com/pytorch/pytorch/issues/138482
- [x] https://github.com/pytorch/pytorch/issues/144433 - https://github.com/pytorch/test-infra/actions/runs/12936813683/job/36083269755#step:14:125
XPU Binaries Validations:
- [x] https://github.com/pytorch/pytorch/issues/145290
### Cherry-Picks to validate
### Versions
2.6.0
| true
|
2,778,830,393
|
Remove is_reduced_floating_point from namespace std
|
swolchok
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144502
Partial fix for #144495. We avoid a BC break by using the existing practice of removing the declarations only if FBCODE_CAFFE2 and C10_NODEPRECATED are not defined.
Differential Revision: [D67992342](https://our.internmc.facebook.com/intern/diff/D67992342/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,778,815,139
|
add most basic event packing
|
nmacchioni
|
closed
|
[
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144507
* #144505
* __->__ #144501
* #144353
* #133287
* #144365
* #133121
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,778,814,887
|
[MPSInductor] Fix `masked`/`where` for inf values
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Move the constant-to-value logic into a `value_to_metal` function (similar to `value_to_cpp`).
Call it from the `constant` op as well as the `where` op (which is in turn called from the `masked` op).
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,778,797,722
|
Add AOTAutogradCache support for cache hot loading APIs
|
jamesjwu
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144499
This diff adds AOTAutogradCache support to the mega cache.
Differential Revision: [D67991059](https://our.internmc.facebook.com/intern/diff/D67991059/)
**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D67991059/)!
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,778,792,558
|
[PGNCCL] Add an API to get the status/error code at the PG level
|
shuqiangzhang
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 11
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144498
Summary:
This PR is basically a replacement of https://github.com/pytorch/pytorch/pull/140087, which caused some perf drop due to frequent TCPStore checks in the watchdog thread. The fix is to move the TCPStore check into the monitoring thread.
If the PG is unhealthy, the user should be able to get the type of error, e.g., timeout, NCCL error, or remote error.
This API is applied at the PG level, compared to the work.get_future_result() API, which is applied at the Work level. Error detection at the PG level is much more convenient for users to handle a PG failure as a whole, e.g., restarting the PG. Error handling at the work level is still useful for users to attach work-specific context and debug the root cause of the specific failing work/collective.
Note it is critical for all ranks in the PG to be notified about an error as soon as it occurs, so we introduce an error type of REMOTE_ERROR, which is 'broadcasted' from a src rank (which detects a local error) to all other ranks in the PG; the broadcast is currently done through TCPStore.
Tags:
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,778,787,069
|
[Inductor] Restrict ND tiling analysis to MemoryDeps
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
# Issue
https://github.com/pytorch/pytorch/pull/137243 introduced a feature where the ND tiling algorithm analyzes memory dependencies. It iterates over all `Dep`'s of the kernel. However, the analysis is only applicable to `MemoryDep` instances, which are a subclass of `Dep`. In particular, it doesn't work for `StarDep`'s, for the reasons described here: https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/simd.py#L1653
# Fix
This PR changes the algorithm to only iterate over `MemoryDep` instances.
# Testing
Parameterized an existing test for `torch.bucketize` to also run with ND tiling. This test emits a node with `StarDep`'s. Without this PR, the compiler would crash on this test case.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,778,778,587
|
[c10d] Fix CudaEventCache for dangling references
|
fduwjj
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"topic: bug fixes"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144496
Reported in https://github.com/pytorch/pytorch/issues/143470, we have a dangling reference in `CudaEventCache`, so we want to fix it.
1. We add a unit test to repro the issue mentioned in that report.
2. Instead of converting variables to shared pointers as suggested in the issue, we make the cache itself a shared pointer. So if the thread that creates the cache dies before all events are recycled, the cache is still there until the last CudaEvent is deleted. (Thanks to @kwen2501 for the suggestion.)
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,778,776,951
|
c10/util/BFloat16-math.h has undefined behavior
|
swolchok
|
open
|
[
"module: build",
"triaged",
"module: core aten"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Per https://en.cppreference.com/w/cpp/language/extending_std:
> It is undefined behavior to add declarations or definitions to namespace std or to any namespace nested within std, with a few exceptions noted below.
The "exceptions noted below" do not seem to include what we're doing in [BFloat16-math.h](https://github.com/pytorch/pytorch/blob/main/c10/util/BFloat16-math.h), and specifically don't include adding overloads of functions that take program-defined types.
This problem is currently "theoretical" in that I am not aware of practical issues resulting from this header at this time.
To fix this, we would need to at least put the functions in BFloat16-math.h into a namespace other than `std` (either `c10` or a new one, like say `c10_math`). Then, we could either:
- have callers do `using std::pow` and all the other cmath functions, and rely on [ADL](https://quuxplusone.github.io/blog/2019/04/26/what-is-adl/) to select the c10/c10_math version for half/BFloat16
- `using` all the std:: functions into our namespace (which IMO argues toward that namespace being a new one like `c10_math`).
### Versions
N/A
cc @malfet @seemethere @manuelcandales @SherlockNoMad @angelayi
| true
|
2,778,764,409
|
[Profiler] Hide Kineto Step Tracker Behind Env Var
|
sraikund16
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: profiler",
"topic: bug fixes"
] | 6
|
CONTRIBUTOR
|
Summary:
To support iteration-based on-demand profiling, we have step tracker hooks for both the scheduler and the optimizer to control Kineto's backend FSM. We already hide the optimizer step tracker behind an ENV_VAR to prevent any extra overhead from the frontend profiler down to the Kineto backend, but we don't do any such thing for the profiler step tracker. It also seems to occasionally cause errors in the FSM when both auto-trace and on-demand occur at the same time.
To remedy this issue, let's put in a patch to guard the step incrementer for the frontend step function. This will bypass all of the on-demand logic, which shouldn't occur in auto-trace.
Test Plan:
Ran
`buck run mode/dev-nosan kineto/libkineto/fb/integration_tests:pytorch_resnet_integration_test -- --enable_profiling --trace_handler=auto_trace --with_stack` and added prints in on-demand functions (performLoopStep and collectTrace) and saw that neither were called even though they were called on main.
Also got following healthy traces:
Auto-Trace (schedule-based):
https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/dynocli/devvm2185.cco0.facebook.com/rank-0.Jan_09_12_43_37.1122140.pt.trace.json.gz&bucket=gpu_traces
Timing Based On-demand:
https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/dynocli/0/1736456722/localhost/libkineto_activities_1286261.json.gz&bucket=gpu_traces
Iteration Based On-demand:
https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/dynocli/0/1736456889/localhost/libkineto_activities_1304781.json.gz&bucket=gpu_traces
Differential Revision: D67990080
| true
|
2,778,688,291
|
Amazon Linux 2023: Preload cusparseLt.so
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Fixes https://github.com/pytorch/pytorch/issues/144433
Test with some debug statements added:
```
>>> import torch
trying to load libcublas.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cublas/lib/libcublas.so.12']
trying to load libcublas.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cublas/lib/libcublas.so.12
trying to load libcudnn.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn.so.9']
trying to load libcudnn.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn.so.9
trying to load libnvrtc.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cuda_nvrtc/lib/libnvrtc.so.12']
trying to load libnvrtc.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cuda_nvrtc/lib/libnvrtc.so.12
trying to load libcudart.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12']
trying to load libcudart.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12
trying to load libcupti.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cuda_cupti/lib/libcupti.so.12']
trying to load libcupti.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cuda_cupti/lib/libcupti.so.12
trying to load libcufft.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cufft/lib/libcufft.so.11']
trying to load libcufft.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cufft/lib/libcufft.so.11
trying to load libcurand.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/curand/lib/libcurand.so.10']
trying to load libcurand.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/curand/lib/libcurand.so.10
trying to load libnvJitLink.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/nvjitlink/lib/libnvJitLink.so.12']
trying to load libnvJitLink.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/nvjitlink/lib/libnvJitLink.so.12
trying to load libcusparse.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cusparse/lib/libcusparse.so.12']
trying to load libcusparse.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cusparse/lib/libcusparse.so.12
trying to load libcusparseLt.so.*[0-9] from []
trying to load libcusparseLt.so.*[0-9] from /usr/local/lib/python3.9/site-packages/cusparselt/lib/libcusparseLt.so.0
trying to load libcusolver.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cusolver/lib/libcusolver.so.11']
trying to load libcusolver.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cusolver/lib/libcusolver.so.11
trying to load libnccl.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/nccl/lib/libnccl.so.2']
trying to load libnccl.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/nccl/lib/libnccl.so.2
trying to load libnvToolsExt.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/nvtx/lib/libnvToolsExt.so.1']
trying to load libnvToolsExt.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/nvtx/lib/libnvToolsExt.so.1
/usr/local/lib64/python3.9/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
>>> exit()
```
| true
|
2,778,660,673
|
patch for block-wise quantization + pt2e
|
cccclai
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: quantization",
"ciflow/inductor",
"release notes: export",
"ci-no-td"
] | 26
|
CONTRIBUTOR
|
Summary: As title, needed for enable qcom block-wise quantization kernel
Test Plan: local test
Differential Revision: D67985303
| true
|
2,778,642,814
|
Restore support for other types of async_compile pools (spawn, fork)
|
masnesral
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144491
Summary: https://github.com/pytorch/pytorch/pull/142001 removed support for process pools other than "subprocess", but some OSS users still find it useful; put it back.
Test Plan: New unit test
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,778,627,067
|
use collective_comm activity for hccl traces
|
fenypatel99
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
MEMBER
|
Summary: Use existing collective_comm (currently used for nccl traces) for hccl traces as well. Only init the nccl profiler when KINETO_HAS_NCCL_PROFILER is defined so as to not init it when the build is for MTIA/HCCL
Test Plan: CIs
Differential Revision: D67285333
| true
|
2,778,624,586
|
Introduce cache clearing APIs for the lazy graph executor
|
rpsilva-aws
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"lazy",
"Merged",
"module: lazy",
"ciflow/trunk",
"release notes: lazy",
"module: inductor",
"module: dynamo"
] | 16
|
CONTRIBUTOR
|
This PR introduces two new methods to the LazyGraphExecutor class:
- ClearComputationCache(): Allows clearing the entire computation cache.
- RemoveFromComputationCache(hash): Enables removal of specific cache entries based on their hash.
The main objective is to expose cache management functionality for debugging cache hits and misses across different computations. For instance:
- Reset the cache state in tests, allowing reuse of the same computation client to evaluate cache logic consistently.
- Selectively remove cache entries to analyze the impact on subsequent computations.
- Improve observability into the cache behavior, aiding in the investigation of cache-related issues or optimizations.
On the XLA lazy graph executor, we want to run a series of tests that modify some parts of the HLO module proto of the computation, and we need a means to ensure that the hash is agnostic to some elements (OpMetadata in the XLA proto data). Hence, it would be easy to parameterize the test, clear the cache and validate that the resulting hash is the same between runs. Otherwise, we'd need to hardcode the resulting serialized hash.
Simultaneously, **another motivation**, is that users could also clear some computation hashes for an added flexibility in their applications, by introducing their own custom strategies for maintaining the cache (without relying on the default LRU).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,778,615,412
|
Back out "[AoTI Minifier] UX Improvement"
|
yushangdi
|
closed
|
[
"fb-exported",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Summary:
Original commit changeset: 2966389eb680
Original Phabricator Diff: D67299312
Test Plan: -
Differential Revision: D67985187
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,778,609,529
|
retracing in strict doesn't like dataclass registration
|
avikchaudhuri
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144487
Retracing in strict doesn't seem to like dataclass registration. Just refactoring some tests to make this explicit (whereas other export testing variants work fine).
Differential Revision: [D67985149](https://our.internmc.facebook.com/intern/diff/D67985149/)
| true
|
2,778,608,076
|
Simplify vec128 bfloat16/half fmadds
|
swolchok
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144486
I was being silly when I wrote these; it doesn't make sense to do four conversions and two FMAs when we could do a multiply and an add.
Differential Revision: [D67985074](https://our.internmc.facebook.com/intern/diff/D67985074/)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,778,585,319
|
[dynamo] Use polyfill to implement comparison operators
|
anijain2305
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ci-no-td"
] | 20
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144485
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,778,579,435
|
Cleanup gpt_fast benchmark
|
nmacchioni
|
closed
|
[
"ciflow/trunk",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Prefer to use `benchmarker.benchmark` instead of explicitly selecting `benchmarker.benchmark_cpu` or `benchmarker.benchmark_gpu`
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144484
| true
|
2,778,578,920
|
Stop ignoring mypy errors in torch/testing/_internal/common_utils.py
|
aorenste
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ci-no-td"
] | 17
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #141659
* __->__ #144483
| true
|
2,778,553,195
|
Link to transformer tutorial in transformer docs
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144425
<img width="1045" alt="Screenshot 2025-01-08 at 4 50 20 PM" src="https://github.com/user-attachments/assets/05adfecb-8a23-4c48-9a2c-50c5b3f886b0" />
| true
|
2,778,532,265
|
DISABLED MULTIPLE ignore this, this is a test
|
clee2000
|
closed
|
[] | 1
|
CONTRIBUTOR
|
This is a test, feel free to ignore this
disable the following tests:
```
test_abcd123 (TestSuiteABCDEF): mac
test_abcd12asdf3 (TestSuiteABCDEF)
test_abcd1asdf23 (TestSuiteABCDEF): linux
```
| true
|
2,778,501,349
|
UNSTABLE pull / linux-jammy-py3-clang12-executorch / test (executorch)
|
huydhn
|
closed
|
[
"module: ci",
"triaged",
"unstable"
] | 13
|
CONTRIBUTOR
|
The test started failing flakily, possibly after https://github.com/pytorch/pytorch/pull/143787 landed, and needs to be updated.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @mergennachin
| true
|
2,778,480,860
|
Failures because of deprecated version of `actions/download-artifact: v3`
|
kit1980
|
closed
|
[
"ci: sev"
] | 4
|
CONTRIBUTOR
|
## Current Status
Mitigated
## Error looks like
> Error: This request has been automatically failed because it uses a deprecated version of `actions/download-artifact: v3`. Learn more: https://github.blog/changelog/2024-04-16-deprecation-notice-v3-of-the-artifact-actions/. This request has been automatically failed because it uses a deprecated version of `actions/upload-artifact: v3`. Learn more: https://github.blog/changelog/2024-04-16-deprecation-notice-v3-of-the-artifact-actions/
https://github.com/pytorch/pytorch/actions/runs/12694840556/job/35385520678
## Incident timeline (all times pacific)
*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*
## User impact
*How does this affect users of PyTorch CI?*
## Root cause
*What was the root cause of this issue?*
## Mitigation
*How did we mitigate the issue?*
## Prevention/followups
*How do we prevent issues like this in the future?*
| true
|
2,778,376,726
|
[dynamo] Support using UserDefinedFunction as argument (as_proxy).
|
IvanKobzarev
|
open
|
[
"feature",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
At the moment, using UserDefinedFunction.as_proxy() fails as unimplemented.
The original reason it is needed: we want dynamo to trace through a subclass constructor that receives a Python Callable as an argument (and the callable is not used inside the constructor).
Dynamo tries to inline it and fails on UserDefinedFunction.as_proxy().
The straightforward approach would be:
1. trace through the UserDefinedFunction with speculate_subgraph and register the subgraph as an attribute of output_graph.
At the moment the UserDefinedFunction is passed as an argument, the arguments for that function are not yet specified.
One idea: could we put in some "stub" empty subgraph at the moment of creating the argument and replace it with the real subgraph only when this UserDefinedFunction is called?
The original testcase:
```
def test_unwrap_subclass_parameters_with_unused_callable_arg_in_ctor(self):
    def fn(x):
        return x

    _test_fn = fn

    class SC(WrapperSubclass):
        @staticmethod
        def __new__(cls, a, fn, outer_size=None, outer_stride=None):
            return WrapperSubclass.__new__(cls, a, outer_size, outer_stride)

        def __init__(self, a, fn, outer_size=None, outer_stride=None):
            self.a = a
            self.fn = fn

        def __tensor_flatten__(self):
            return ["a"], [self.fn]

        @staticmethod
        def __tensor_unflatten__(inner_tensors, meta, outer_size, outer_stride):
            a = inner_tensors["a"]
            fn = meta[0]
            return SC(a, fn, outer_size, outer_stride)

        @classmethod
        def __torch_dispatch__(cls, func, types, args, kwargs):
            if kwargs is None:
                kwargs = {}
            args_a = pytree.tree_map_only(cls, lambda x: x.a, args)
            kwargs_a = pytree.tree_map_only(cls, lambda x: x.a, kwargs)
            out_a = func(*args_a, **kwargs_a)
            out_a_flat, spec = pytree.tree_flatten(out_a)
            out_flat = [
                cls(o_a, _test_fn) if isinstance(o_a, torch.Tensor) else o_a
                for o_a in out_a_flat
            ]
            out = pytree.tree_unflatten(out_flat, spec)
            from torch._higher_order_ops.cond import cond_op

            if func is cond_op:
                return out
            else:
                return return_and_correct_aliasing(func, args, kwargs, out)

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.p1 = torch.nn.Parameter(torch.ones(3, 4))
            self.p2 = torch.nn.Parameter(SC(torch.ones(3, 4), _test_fn))

        def forward(self, x):
            return x + 2 * self.p1 + self.p2

    m = M()
    from torch._functorch._aot_autograd.subclass_parametrization import (
        unwrap_tensor_subclass_parameters,
    )

    unwrap_tensor_subclass_parameters(m)

    x = torch.randn(3, 4)
    comp_fn = torch.compile(m, backend="aot_eager", fullgraph=True)
    out = comp_fn(x)
```
Error:
```
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1685, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/symbolic_convert.py", line 921, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/variables/user_defined.py", line 600, in call_function
*proxy_args_kwargs(args, kwargs, tx=tx),
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/utils.py", line 976, in proxy_args_kwargs
unimplemented(
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/exc.py", line 355, in unimplemented
raise Unsupported(msg, case_name=case_name) from from_exc
torch._dynamo.exc.Unsupported: call_function args: TensorVariable() UserFunctionVariable() ConstantVariable(NoneType: None) ConstantVariable(NoneType: None)
from user code:
File "/data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py", line 6421, in forward
return x + 2 * self.p1 + self.p2
File "/data/users/ivankobzarev/a/pytorch/torch/nn/utils/parametrize.py", line 407, in get_parametrized
return parametrization()
File "/data/users/ivankobzarev/a/pytorch/torch/nn/utils/parametrize.py", line 303, in forward
x = self[0](*originals)
File "/data/users/ivankobzarev/a/pytorch/torch/_functorch/_aot_autograd/subclass_parametrization.py", line 16, in forward
rebuilt = tp.__tensor_unflatten__(d, meta, None, None) # type: ignore[attr-defined]
File "/data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py", line 6391, in __tensor_unflatten__
return SC(a, fn, outer_size, outer_stride)
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,778,337,442
|
Amazon Linux 2023: Preload cusparseLt.so
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/144433
Test with some debug statements added:
```
>>> import torch
trying to load libcublas.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cublas/lib/libcublas.so.12']
trying to load libcublas.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cublas/lib/libcublas.so.12
trying to load libcudnn.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn.so.9']
trying to load libcudnn.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn.so.9
trying to load libnvrtc.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cuda_nvrtc/lib/libnvrtc.so.12']
trying to load libnvrtc.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cuda_nvrtc/lib/libnvrtc.so.12
trying to load libcudart.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12']
trying to load libcudart.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12
trying to load libcupti.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cuda_cupti/lib/libcupti.so.12']
trying to load libcupti.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cuda_cupti/lib/libcupti.so.12
trying to load libcufft.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cufft/lib/libcufft.so.11']
trying to load libcufft.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cufft/lib/libcufft.so.11
trying to load libcurand.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/curand/lib/libcurand.so.10']
trying to load libcurand.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/curand/lib/libcurand.so.10
trying to load libnvJitLink.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/nvjitlink/lib/libnvJitLink.so.12']
trying to load libnvJitLink.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/nvjitlink/lib/libnvJitLink.so.12
trying to load libcusparse.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cusparse/lib/libcusparse.so.12']
trying to load libcusparse.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cusparse/lib/libcusparse.so.12
trying to load libcusparseLt.so.*[0-9] from []
trying to load libcusparseLt.so.*[0-9] from /usr/local/lib/python3.9/site-packages/cusparselt/lib/libcusparseLt.so.0
trying to load libcusolver.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/cusolver/lib/libcusolver.so.11']
trying to load libcusolver.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/cusolver/lib/libcusolver.so.11
trying to load libnccl.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/nccl/lib/libnccl.so.2']
trying to load libnccl.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/nccl/lib/libnccl.so.2
trying to load libnvToolsExt.so.*[0-9] from ['/usr/local/lib/python3.9/site-packages/nvidia/nvtx/lib/libnvToolsExt.so.1']
trying to load libnvToolsExt.so.*[0-9] from /usr/local/lib/python3.9/site-packages/nvidia/nvtx/lib/libnvToolsExt.so.1
/usr/local/lib64/python3.9/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
>>> exit()
```
| true
|
2,778,300,277
|
[ROCm] add support for gfx12
|
jeffdaily
|
closed
|
[
"module: rocm",
"open source",
"release notes: linalg_frontend",
"ciflow/rocm"
] | 2
|
COLLABORATOR
|
- add gfx1200 gfx1201 to allowed hipblaslt lists
- enable hipify of CUDA_R_8F_E4M3 and CUDA_R_8F_E5M2
- enable types of Float8_e4m3fn Float8_e5m2
- conditionalize tests to create appropriate F8 types for given gfx target
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,778,298,885
|
Partially Revert D67299312: [AoTI Minifier] UX Improvement" for one test failure
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary:
This diff partially reverts D67299312
D67299312: [AoTI Minifier] UX Improvement by yushangdi causes the following test failure:
Differential Revision: D67963019
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,778,250,585
|
Fix block pointer test module for triton CPU and add to CI
|
kundaMwiza
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
- Fix for BlockPointerTestBase._discontiguous_tensor. It defaults to constructing CUDA tensors, causing a failure if CUDA is not available.
- Add test module to CI to prevent errors like the above from occurring.
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,778,216,066
|
[20/N] Fix extra warnings brought by clang-tidy-17
|
cyyever
|
closed
|
[
"oncall: distributed",
"module: cpu",
"triaged",
"open source",
"release notes: cpp"
] | 7
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,778,149,008
|
[BE]: Remove redundant contiguous copy in torch/_decomp/decompositions
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Removes a redundant extra copy by calling contiguous. Instead, just add a memory_format flag to the dtype cast.
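For illustration, a minimal sketch of the fused pattern being described (simplified, not the exact decomposition code touched by this PR):
```python
import torch

x = torch.randn(2, 3, 4, 5, dtype=torch.float16)

# Before: two copies -- one for contiguous(), one for the dtype cast.
y_old = x.transpose(1, 2).contiguous().to(torch.float32)

# After: a single copy that casts and makes the result contiguous at once.
y_new = x.transpose(1, 2).to(torch.float32, memory_format=torch.contiguous_format)

assert torch.equal(y_old, y_new) and y_new.is_contiguous()
```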
| true
|
2,778,102,291
|
Add max kwarg to torch._check with alternate size oblivious semantics
|
ezyang
|
closed
|
[
"Merged",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144745
* #144743
* __->__ #144471
Fixes https://github.com/pytorch/pytorch/issues/120288 for the static bound case
I had been tying myself in knots in the original issue about the fact that we can't really do symbolic bounds like u0 < s0. But then I realized, "Wait, but the static bounds are easy!" So this makes it so you can also exclude a specific upper bound when doing size oblivious tests, which is enough to solve https://github.com/pytorch/pytorch/issues/123592#issuecomment-2574556708
It's written very dirtily, maybe there's some cleanup. Bikeshed on the public API name also welcome.
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,778,092,242
|
Partitioner's auto-AC misbehaves with mixed dtypes
|
lw
|
open
|
[
"module: activation checkpointing",
"feature",
"module: autograd",
"triaged",
"oncall: pt2"
] | 4
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
When using the partitioner to do some automatic selective activation checkpointing, the default mode consists of estimating the runtime cost of the operators by computing the number of FLOPs they consume. Unfortunately, this is a bad proxy metric in cases where different operators are executed with different dtypes because, in a sense, not all FLOPs are equal.
Concretely, this occurs when using fp8 matmuls on H100 GPUs, because (in most/all current recipes) only the matmuls are converted to fp8, whereas the self-attention remains in bf16. Moreover, in some cases only _some_ matmuls get converted to fp8 (e.g., some layers, or some specific weights within layers).
If the partitioner just compares FLOPs without adjusting them by the time it takes to execute a FLOP in a given dtype, it might arrive at a suboptimal solution to the AC problem.
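As a rough illustration of the adjustment being asked for (the names and scaling factors below are invented for the example and are not the partitioner's actual API):
```python
import torch

# Hypothetical per-dtype weights (illustrative numbers only): a FLOP in fp8 is
# cheaper than a FLOP in bf16 on H100, so raw FLOP counts over-estimate fp8 ops.
RELATIVE_FLOP_COST = {
    torch.float8_e4m3fn: 0.5,
    torch.bfloat16: 1.0,
    torch.float32: 2.0,
}

def weighted_flop_cost(flops: int, dtype: torch.dtype) -> float:
    # Scale raw FLOP counts by a dtype-dependent factor before the partitioner
    # compares candidate nodes for recomputation.
    return flops * RELATIVE_FLOP_COST.get(dtype, 1.0)
```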
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu
| true
|
2,778,055,487
|
[BE]: Replace clone detach with detach clone to be more efficient
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"release notes: vulkan"
] | 3
|
COLLABORATOR
|
Follow up to #144270 and fix some vulkan code
| true
|
2,778,039,277
|
[BE]: Follow detach().clone() pattern for SGD
|
Skylion007
|
open
|
[
"triaged",
"open source",
"better-engineering",
"Stale",
"ciflow/trunk",
"release notes: optim"
] | 15
|
COLLABORATOR
|
clone() copies the gradients too, but we immediately detach them. detach() returns a view of the tensor without its gradients, and then clone() copies only that subset. Related to #144270
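A minimal illustration of the pattern (assuming a plain parameter tensor):
```python
import torch

p = torch.randn(4, requires_grad=True)

# detach() first returns a gradient-free view, so the subsequent clone() only
# copies the data and records nothing in the autograd graph.
snapshot = p.detach().clone()

# clone().detach() would first create a clone tracked by autograd and only then
# detach it, doing strictly more bookkeeping for the same result.
assert torch.equal(snapshot, p.detach())
assert not snapshot.requires_grad
```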
| true
|
2,778,010,521
|
[BE]: Remove redundant contiguous copy in flex attention
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Removes a redundant potential copy; instead, the memory_format kwarg is used to fuse both operations into a single copy.
| true
|
2,777,835,430
|
DISABLED test_qs8_conv1d_batchnorm_seq (__main__.TestConv1d)
|
albanD
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: linux, executorch
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22backends%2Fxnnpack%2Ftest%2Fops%2Ftest_conv1d.py%3A%3ATestConv1d%3A%3Atest_qs8_conv1d_batchnorm_seq%22%5D)).
This is most likely due to hardcoded random values following https://github.com/pytorch/pytorch/pull/143787
We should investigate and fix the test to not rely on specific random samples.
cc @clee2000 @wdvr @mergennachin
| true
|
2,777,771,359
|
Support Swiglu for Module and functional
|
fmo-mt
|
open
|
[
"triaged",
"open source",
"release notes: nn",
"release notes: cpp",
"module: inductor",
"module: dynamo"
] | 28
|
CONTRIBUTOR
|
Fixes #128712
I see #138790 has not been updated for a while, and it only implements a torch.nn.Module that cannot be fully used for SwiGLU, so I implemented this activation function (a rough sketch of the computation is shown after the list below).
Here are some features that we can discuss:
- Is it better to use `torch.swiglu` or `torch.swish_glu`? I chose the latter.
- Do we need to implement a kernel to get better performance? I think so, but it needs more effort, so I leave it to other contributors.
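For reference, a minimal sketch of the intended computation (the exact function name and module signature in this PR may differ):
```python
import torch
import torch.nn.functional as F

def swish_glu(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Split the input into two halves and gate one with SiLU/Swish of the other:
    # SwiGLU(x) = SiLU(x1) * x2
    x1, x2 = x.chunk(2, dim=dim)
    return F.silu(x1) * x2

class SwishGLU(torch.nn.Module):
    def __init__(self, dim: int = -1) -> None:
        super().__init__()
        self.dim = dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return swish_glu(x, self.dim)
```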
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,777,753,083
|
[ONNX] Trying to export QAT model with torch.onnx.export, running into Problems that workarounds cant fix. Is ONNX export planned for QAT?
|
Beoje
|
open
|
[
"module: onnx",
"triaged"
] | 6
|
NONE
|
### 🐛 Describe the bug
The bug occurs when trying to export a fully converted QAT model to ONNX. I have tried the workarounds suggested in (https://pytorch.org/docs/main/quantization.html#frequently-asked-questions), but even after trying all of them and debugging further I end up at various torch.onnx.errors.SymbolicValueError exceptions, which I cannot really explain because these workarounds feel like patchwork.
Imports:
```
import torch.onnx
import torch
import torch.nn as nn
from torchinfo import summary
import numpy as np
import onnx
from torch.ao.quantization import fuse_modules, QuantStub, DeQuantStub
from torch.ao.quantization import (
    get_default_qat_qconfig,
    prepare_qat,
    convert
)
```
My Code:
```
class Conv1d(nn.Conv1d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.causal_padding = self.dilation[0] * (self.kernel_size[0] - 1)
        self.conv1d = nn.Conv1d(
            in_channels=self.in_channels,
            out_channels=self.out_channels,
            kernel_size=self.kernel_size[0],
            stride=self.stride[0],
            dilation=self.dilation[0],
            padding=0,
            padding_mode='zeros'
        )

    def forward(self, x):
        x = nn.functional.pad(x, (self.causal_padding//2, self.causal_padding-self.causal_padding//2), 'constant')
        return self.conv1d(x)


class ResidualUnit(nn.Module):
    def __init__(self, in_channels, out_channels, dilation):
        super().__init__()
        self.dilation = dilation
        self.layers = nn.Sequential(
            Conv1d(in_channels=in_channels, out_channels=out_channels,
                   kernel_size=5, dilation=dilation),
            nn.LeakyReLU(),
            nn.Conv1d(in_channels=in_channels, out_channels=out_channels,
                      kernel_size=1),
        )

    def forward(self, x):
        return x + self.layers(x)


class EncoderBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride):
        super().__init__()
        self.layers = nn.Sequential(
            ResidualUnit(in_channels=int(in_channels),
                         out_channels=int(in_channels), dilation=3),
            nn.LeakyReLU(),
            Conv1d(in_channels=int(in_channels), out_channels=int(out_channels),
                   kernel_size=2*stride, stride=stride),
        )

    def forward(self, x):
        return self.layers(x)


class Encoder(nn.Module):
    def __init__(self, C=36, D=9):
        super().__init__()
        self.layers = nn.Sequential(
            Conv1d(in_channels=36, out_channels=C, kernel_size=7),
            nn.LeakyReLU(),
            EncoderBlock(in_channels=C, out_channels=24, stride=1),
            nn.LeakyReLU(),
            EncoderBlock(in_channels=24, out_channels=20, stride=2),
            nn.LeakyReLU(),
            EncoderBlock(in_channels=20, out_channels=16, stride=2),
            nn.LeakyReLU(),
            EncoderBlock(in_channels=16, out_channels=12, stride=2),
            nn.LeakyReLU(),
            Conv1d(in_channels=12, out_channels=D, kernel_size=1),
        )

    def forward(self, x):
        x = self.layers(x)
        return x


if __name__ == "__main__":
    model = Encoder()
    model.train()
    model.qconfig = get_default_qat_qconfig("x86")
    qat_model = prepare_qat(model)
    qat_model_done = convert(qat_model)

    dummy_input = torch.rand(1, 36, 800)
    onnx_file_path = "./qat_export_test.onnx"

    onnx_model = torch.onnx.export(
        qat_model_done,
        dummy_input,
        onnx_file_path,
        export_params=True
    )
```
Error:
```
Traceback (most recent call last):
File "/media/data/xxx/xxx/prev_code/xxx/data_compression/data_compression/examples/idk.py", line 115, in <module>
onnx_model = torch.onnx.export(
^^^^^^^^^^^^^^^^^^
File /media/data/xxx/xxx/prev_code/xxx//prev_code/moritz_martinolli_397/venv/lib/python3.11/site-packages/torch/onnx/__init__.py", line 375, in export
export(
File "/media/data/xxx/xxx/prev_code/xxx/prev_code/moritz_martinolli_397/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 502, in export
_export(
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 639, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1836, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/symbolic_opset10.py", line 747, in dequantize
return symbolic_helper.dequantize_helper(g, input)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/symbolic_helper.py", line 1525, in dequantize_helper
unpacked_qtensors = _unpack_quantized_tensor(qtensor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/symbolic_helper.py", line 189, in _unpack_quantized_tensor
raise errors.SymbolicValueError(
torch.onnx.errors.SymbolicValueError: ONNX symbolic expected the output of `%x.1 : Float(1, 36, 800, strides=[28800, 800, 1], requires_grad=0, device=cpu) = prim::Param()
` to be a quantized tensor. Is this likely due to missing support for quantized `prim::Param`. Please create an issue on https://github.com/pytorch/pytorch/issues [Caused by the value 'x.1 defined in (%x.1 : Float(1, 36, 800, strides=[28800, 800, 1], requires_grad=0, device=cpu) = prim::Param()
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'prim::Param'.]
Inputs:
Empty
Outputs:
#0: x.1 defined in (%x.1 : Float(1, 36, 800, strides=[28800, 800, 1], requires_grad=0, device=cpu) = prim::Param()
) (type 'Tensor')
```
This error arose while I was applying the QuantStub and DeQuantStub workarounds suggested in the QAT FAQ.
If no one has time to actually recreate this and debug it, please just tell me whether full QAT-to-ONNX export support is planned or not (I've seen contributors saying it is not planned, and I've seen posts saying it should be in torch 2.0, so a clear answer would be good), so I know whether I can disregard this until it eventually becomes an option.
### Versions
```
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.7.101
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.2.10.91
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.0.1
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.4.91
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu11==2.14.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.7.91
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241217
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
```
| true
|
2,777,640,254
|
improve WOQ first token performance on CPU
|
yuchengliu1
|
open
|
[
"module: cpu",
"module: mkldnn",
"open source",
"release notes: releng",
"module: inductor"
] | 1
|
NONE
|
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,777,458,487
|
[Inductor][GPU] Input is padded with incorrect value when executing `torch.nn.functional.pad` on gpu
|
maybeLee
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When running `torch.nn.functional.pad` using `torch.compile` with GPU, this operator will not use the `value` parameter when padding a one-element tensor.
Instead, it pads the tensor with the tensor's own value.
Please run the following code to reproduce this issue:
```python
import torch
from torch import nn

torch.manual_seed(100)

class Temp(nn.Module):
    def __init__(self):
        super(Temp, self).__init__()

    def forward(self, input, pad, mode, value):
        return torch.nn.functional.pad(input, pad, mode, value)

model = Temp()
cmodel = torch.compile(model)

input = torch.randn(1, 1, 1)
print(f"The input is: ", input)
input = input.to('cuda')
pad = [10, 10]
mode = 'constant'
value = 0
print(f"Eager result: ", model(input, pad, mode, value))
print(f"Compiled result: ", cmodel(input, pad, mode, value))
```
Output:
```
The input is: tensor([[[0.3607]]])
Eager result: tensor([[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.3607, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]], device='cuda:0')
Compiled result: tensor([[[0.3607, 0.3607, 0.3607, 0.3607, 0.3607, 0.3607, 0.3607, 0.3607,
0.3607, 0.3607, 0.3607, 0.3607, 0.3607, 0.3607, 0.3607, 0.3607,
0.3607, 0.3607, 0.3607, 0.3607, 0.3607]]], device='cuda:0')
```
This issue seems worth fixing since the API `torch.nn.functional.pad` **silently pads the given tensor with a wrong padding value**, without any warning messages.
Moreover, another API (`nn.ConstantPad2d`) also has this issue.
Please note that this issue only happens when running on cuda.
### Versions
PyTorch version: 2.7.0.dev20250109+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 74%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250109+cu126
[pip3] torchaudio==2.6.0.dev20250106+cu124
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250109+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250106+cu124 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng
| true
|
2,777,028,709
|
ThroughputBenchmark incorrectly change autocast dtype on CPU
|
shiyang-weng
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
After applying this patch https://github.com/pytorch/pytorch/commit/1c2593f035d852a5743b8634eccf9476d775f1ad#diff-176dae2fa097bbdb3f04182d59109027bbe143a63bc5ca045dcaae4b0c4798a7R461, ThroughputBenchmark does not work. The previous commit runs it successfully.
```python
import torch

class CatDense(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(128, 128)

    def forward(self, x) -> torch.Tensor:
        y = self.linear(x)
        return y

class Model(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(13, 128)
        self.catdense = CatDense()

    def forward(self, dense):
        out = self.linear(dense)
        out = self.catdense(out)
        return out

dtype = torch.float32  # this issue also exists with float16
bs = 256

from torch._inductor import config as inductor_config
from torch._dynamo import config
config.error_on_recompile = True
inductor_config.cpp_wrapper = True
inductor_config.cpp.enable_kernel_profile = True
inductor_config.freezing = True

model = Model()
dense = torch.zeros(bs, 13)
model(dense)

autocast = dtype != torch.float32
with torch.no_grad(), torch.cpu.amp.autocast(enabled=autocast, dtype=dtype):
    print('[Info] Running torch.compile() with default backend')
    model(dense)
    model = torch.compile(model)
    model(dense)
    model(dense)

from torch.utils import ThroughputBenchmark
import contextlib

ctx = contextlib.suppress()
if dtype == 'fp16':
    ctx = torch.cpu.amp.autocast(enabled=autocast, dtype=dtype)

with torch.no_grad(), ctx:
    bench = ThroughputBenchmark(model)
    bench.add_input(dense)
    stats = bench.benchmark(
        num_calling_threads=1,
        num_warmup_iters=200,
        num_iters=300,
    )
```
The following error occurs:
```
terminate called after throwing an instance of 'pybind11::error_already_set'
what(): RecompileError: Recompiling function forward in pytorch/torchrec_dlrm/inference/cpu/debug_comp
ile.py:18
triggered by the following guard failure(s):
- 0/0: GLOBAL_STATE changed: autocast 123
At:
pytorch/torch/_dynamo/guards.py(2824): get_and_maybe_log_recompilation_reason
pytorch/torch/_dynamo/convert_frame.py(893): _compile
pytorch/torch/_dynamo/convert_frame.py(548): __call__
pytorch/torch/_dynamo/convert_frame.py(1227): __call__
pytorch/torch/_dynamo/convert_frame.py(1387): __call__
pytorch/torch/nn/modules/module.py(1750): _call_impl
pytorch/torch/nn/modules/module.py(1739): _wrapped_call_impl
pytorch/torchrec_dlrm/inference/cpu/pytorch/torch/_dynamo/eval_frame.py(588): _fn
pytorch/torchrec_dlrm/inference/cpu/pytorch/torch/nn/modules/module.py(1750): _call_impl
pytorch/torch/nn/modules/module.py(1739): _wrapped_call_impl
Aborted (core dumped)
```
I printed `AutocastState().dtype[0]` in `pytorch/torch/csrc/dynamo/guards.cpp:GlobalStateGuard::check` with the following patch:
```
# git diff torch/csrc/dynamo/guards.cpp
diff --git a/torch/csrc/dynamo/guards.cpp b/torch/csrc/dynamo/guards.cpp
index 0f016d3fc9..54eaecca2f 100644
--- a/torch/csrc/dynamo/guards.cpp
+++ b/torch/csrc/dynamo/guards.cpp
@@ -540,6 +540,13 @@ struct AutocastState {
   bool operator==(const AutocastState& o) const {
     for (size_t i = 0; i < DEVICES.size(); i++) {
       if (enabled[i] != o.enabled[i] || dtype[i] != o.dtype[i]) {
+        if (enabled[i] != o.enabled[i]) {
+          std::cout << "not enable\n";
+        }
+        if (dtype[i] != o.dtype[i]) {
+          std::cout << "dtype mismatch\n";
+          std::cout << dtype[i] << " " << o.dtype[i] << std::endl;
+        }
         return false;
       }
     }
@@ -577,6 +584,7 @@ struct GlobalStateGuard {
   inline bool check() const {
     auto& ctx = at::globalContext();
+    std::cout << "AutocastState().dtype[0]: " << AutocastState().dtype[0] << std::endl;
     return (_grad_mode == at::GradMode::is_enabled() &&
         _autocast_state == AutocastState() &&
         _torch_function == torch::torch_function_enabled() &&
```
I found that bench.benchmark changes AutocastState().dtype[0] to BF16, but the data type should be fp32. FP16 also has this issue.
### Versions
```
# cd pytorch/
# git log -1
commit 1c2593f035d852a5743b8634eccf9476d775f1ad (HEAD)
# cd vision/
# git log -1
commit d3beb52a00e16c71e821e192bcc592d614a490c0 (HEAD -> main, origin/main, origin/HEAD)
# python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+git1c2593f
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: CentOS Linux 8 (x86_64)
GCC version: (conda-forge gcc 12.3.0-13) 12.3.0
Clang version: Could not collect
CMake version: version 3.29.4
Libc version: glibc-2.28
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-gnr.bkc.6.2.7.5.31.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 480
On-line CPU(s) list: 0-479
Thread(s) per core: 2
Core(s) per socket: 120
Socket(s): 2
NUMA node(s): 6
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
CPU family: 6
Model: 173
Model name: Intel(R) Xeon(R) 6979P
BIOS Model name: Intel(R) Xeon(R) 6979P
Stepping: 1
CPU MHz: 750.854
CPU max MHz: 3900.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 64K
L2 cache: 2048K
L3 cache: 516096K
NUMA node0 CPU(s): 0-39,240-279
NUMA node1 CPU(s): 40-79,280-319
NUMA node2 CPU(s): 80-119,320-359
NUMA node3 CPU(s): 120-159,360-399
NUMA node4 CPU(s): 160-199,400-439
NUMA node5 CPU(s): 200-239,440-479
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_
freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_de
adline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 s
sbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invp
cid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec
xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk cat_l3_io cqm_occup_llc_io cqm_mbm_io avx
_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2
gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm m
d_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.1
[pip3] onnx==1.16.2
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0a0+git1c2593f
[pip3] torchaudio==2.6.0.dev20250105+cpu
[pip3] torchmetrics==0.11.0
[pip3] torchrec==0.3.2
[pip3] torchsnapshot==0.1.0
[pip3] torchvision==0.22.0a0+d3beb52
[conda] mkl 2024.2.0 pypi_0 pypi
[conda] mkl-include 2024.2.0 pypi_0 pypi
[conda] numpy 2.0.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0a0+git1c2593f dev_0 <develop>
[conda] torchaudio 2.6.0.dev20250105+cpu pypi_0 pypi
[conda] torchmetrics 0.11.0 pypi_0 pypi
[conda] torchrec 0.3.2 pypi_0 pypi
[conda] torchsnapshot 0.1.0 pypi_0 pypi
[conda] torchvision 0.22.0a0+d3beb52 dev_0 <develop>
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
| true
|
2,777,028,677
|
[Inductor] Fix starvation issue when threads attempt to acquire write…
|
NSBlink
|
closed
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing",
"module: inductor"
] | 7
|
NONE
|
… lock on `model_exec_mutex_`
Fixes #144459
## Explanation
A write-priority strategy is implemented. When a thread attempts to acquire the `model_exec_mutex_` write lock, other threads attempting to acquire the `model_exec_mutex_` read lock enter a waiting state until no thread is requesting the `model_exec_mutex_` write lock.
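A rough Python sketch of the write-preferring strategy described above (the actual fix lives in the C++ AOTI runtime; this is only to illustrate the idea):
```python
import threading

class WritePreferringRWLock:
    def __init__(self) -> None:
        self._cond = threading.Condition()
        self._readers = 0
        self._waiting_writers = 0
        self._writer_active = False

    def acquire_read(self) -> None:
        with self._cond:
            # New readers wait while any writer is active or waiting, so a
            # pending writer cannot be starved by a steady stream of readers.
            while self._writer_active or self._waiting_writers > 0:
                self._cond.wait()
            self._readers += 1

    def release_read(self) -> None:
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self) -> None:
        with self._cond:
            self._waiting_writers += 1
            while self._writer_active or self._readers > 0:
                self._cond.wait()
            self._waiting_writers -= 1
            self._writer_active = True

    def release_write(self) -> None:
        with self._cond:
            self._writer_active = False
            self._cond.notify_all()
```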
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,777,027,183
|
In AOTI Runtime, acquiring a write lock on model_exec_mutex_ may cause starvation.
|
NSBlink
|
open
|
[
"triaged",
"oncall: export",
"module: aotinductor"
] | 2
|
NONE
|
### 🐛 Describe the bug
**Describe**
https://github.com/pytorch/pytorch/blob/6f28e466f3d7b396dfe9cea87f4377be77fa7ddf/torch/csrc/inductor/aoti_runtime/model_container.h#L87
In `AOTInductorModelContainer::run`, if constant_folded_ is false, it will attempt to acquire a write lock on `model_exec_mutex_`. However, if multiple threads enter here simultaneously, some threads will be blocked. Subsequently, other threads may continue calling `AOTInductorModelContainer::run` while holding the read lock on `model_exec_mutex_`, and those threads attempting to acquire the write lock on `model_exec_mutex_` may remain blocked.
**reproduction**
Use any model, such as the one in the [tutorial](https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html), to export model.so; complex models may be more likely to reproduce the issue.
When inference is executed in multiple threads, there is a probability that some threads will be blocked.
```
#include <iostream>
#include <vector>
#include <thread>
#include <torch/torch.h>
#include <torch/csrc/inductor/aoti_runner/model_container_runner_cpu.h>

void test_inference(torch::inductor::AOTIModelContainerRunnerCpu& runner) {
    std::vector<torch::Tensor> inputs = {torch::randn({8, 10}, at::kCPU)};
    for (int i = 0; i < 1000000; i++) {
        std::vector<torch::Tensor> outputs = runner.run(inputs);
    }
}

int main() {
    c10::InferenceMode mode;
    torch::inductor::AOTIModelContainerRunnerCpu runner("./model.so", 512);
    torch::set_num_interop_threads(1);
    torch::set_num_threads(1);

    std::vector<std::thread> threads;
    for (int i = 0; i < 16; i++) {
        threads.push_back(std::thread(test_inference, std::ref(runner)));
    }
    for (int i = 0; i < 16; i++) {
        threads[i].join();
    }
    return 0;
}
```

### Versions
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.5.1+cpu
[pip3] torchaudio==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,776,984,082
|
Support negative values for fill with uint tensors
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144458
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Fixes https://github.com/pytorch/pytorch/issues/144188
| true
|
2,776,937,934
|
[Inductor UT] Add expected failure for newly added case on XPU, align CUDA.
|
etaf
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144452
* #144456
* __->__ #144457
The newly added case `test_randint_distribution` from #143787 was marked as an expected failure for CUDA but not for XPU.
We add the expected failure here because it fails for the same reason as CUDA.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,776,891,038
|
[Inductor UT] Generalize newly introduced device-bias hard code in
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144452
* __->__ #144456
* #144457
Re-land #143975. Fixes the hard-coded "cuda" in test_pattern_matcher.py introduced by #139321
Fix #143974
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,776,765,876
|
[Profiler]Close CUPTI by default after pytorch profiling ends
|
leveretconey
|
open
|
[
"oncall: profiler"
] | 4
|
NONE
|
### 🚀 The feature, motivation and pitch
Recently, I observed that the torch profiler continues to affect program performance even after profiling has completed. Upon investigation, I found that this issue occurs because Kineto does not close CUPTI (i.e., it does not call `cuptiFinalize`) by default when profiling ends, unless the environment variable `TEARDOWN_CUPTI` is explicitly set to `"1"`.
I believe most users are not aware of this behavior. Is it possible to change the behavior so that CUPTI is closed by default at the end of profiling when the environment variable is not set? Are there risks associated with this change?
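For reference, a minimal way to opt into teardown today seems to be setting the variable before profiling starts (assuming Kineto reads it at that point):
```python
import os
os.environ["TEARDOWN_CUPTI"] = "1"  # ask Kineto to call cuptiFinalize when profiling ends

import torch
from torch.profiler import profile, ProfilerActivity

a = torch.randn(1024, 1024, device="cuda")
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]):
    (a @ a).sum()
```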
I noticed the following comment in the torch code:
```python
if self.config.get("triton.cudagraphs", False):
    os.environ["DISABLE_CUPTI_LAZY_REINIT"] = "1"
    # FIXME: CUDA Graph does not work well with CUPTI teardown.
    #   1) crashes on 1st lazy CUPTI re-init after teardown (CUDA 11)
    #   2) crashes on 2nd non-lazy CUPTI re-init after teardown (CUDA 12)
    # Workaround: turn off CUPTI teardown when using CUDA Graphs.
    os.environ["TEARDOWN_CUPTI"] = "0"
```
Is the current behavior of not closing CUPTI primarily due to concerns about crashes caused by CUDA Graphs?
Thank you for your attention to this matter.
### Alternatives
_No response_
### Additional context
_No response_
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
| true
|
2,776,755,088
|
Apply clang-format for ATen/core headers
|
zeshengzong
|
closed
|
[
"open source",
"Stale",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
The code change was made by adding a path config to the .lintrunner.toml file and running
```bash
lintrunner -a --take CLANGFORMAT --all-files
```
| true
|
2,776,737,651
|
Enable XPU tests
|
zhangxiaoli73
|
closed
|
[
"oncall: distributed",
"release notes: distributed (fsdp)"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,776,716,306
|
[WIP][Inductor XPU] Use channel last format for XPU conv in Inductor.
|
etaf
|
closed
|
[
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144452
* #144456
* #144457
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,776,712,445
|
Remove the `_stacklevel` arg from `log_softmax`, `softmax` and `softmin`
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"Stale",
"release notes: AO frontend",
"suppress-bc-linter"
] | 3
|
CONTRIBUTOR
|
Fixes #83163
Remove `_stacklevel` parameter
**Test Result**
**Before**



**After**



| true
|
2,776,690,400
|
[XPU] Fix build error
|
etaf
|
closed
|
[
"module: cpu",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144452
* #144456
* #144457
* __->__ #144450
Caused by https://github.com/pytorch/pytorch/pull/144014
Fixes https://github.com/pytorch/pytorch/issues/144447
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,776,678,361
|
[mps/inductor] Add support for fmod().
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
MEMBER
|
397 -> 395 tests failing. The `static_cast<>` is needed because there are several overloads of `fmod()` that are otherwise ambiguous. I wonder if we should take NaN propagation into account (maybe it's not tested).
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,776,674,336
|
[19/N] Fix extra warnings brought by clang-tidy-17
|
cyyever
|
closed
|
[
"oncall: distributed",
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
Apply more clang-tidy fixes. There was a bug introduced by #144014 due to incorrect namespace concatenation which is reverted here.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @xmfan @yf225
| true
|
2,776,661,041
|
[Break XPU] PyTorch XPU build failure caused by incorrect change in xpu/Blas.cpp from #144014.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
As the title says, the change from #144014, in [aten/src/ATen/native/mkldnn/xpu/Blas.cpp](https://github.com/pytorch/pytorch/pull/144014/files#diff-9ae74b4a8990350760237cc09e715cc25a333f1d0655bd13cddb71c62cea2a39), breaks the original namespace declaration and causes the PyTorch XPU build to fail:
```
home/xinanlin/xinanlin/pytorch/aten/src/ATen/TensorMeta.h:58:36: error: ‘structured_addmm_out_xpu’ has not been declared
58 | #define TORCH_IMPL_FUNC(name) void structured_##name::impl
| ^~~~~~~~~~~
/home/xinanlin/xinanlin/pytorch/aten/src/ATen/native/mkldnn/xpu/Blas.cpp:459:1: note: in expansion of macro ‘TORCH_IMPL_FUNC’
459 | TORCH_IMPL_FUNC(addmm_out_xpu)
| ^~~~~~~~~~~~~~~
/home/xinanlin/xinanlin/pytorch/aten/src/ATen/native/mkldnn/xpu/Blas.cpp:460:8: error: ‘Tensor’ does not name a type
460 | (const Tensor& self,
| ^~~~~~
/home/xinanlin/xinanlin/pytorch/aten/src/ATen/native/mkldnn/xpu/Blas.cpp:461:8: error: ‘Tensor’ does not name a type
461 | const Tensor& mat1,
| ^~~~~~
/home/xinanlin/xinanlin/pytorch/aten/src/ATen/native/mkldnn/xpu/Blas.cpp:462:8: error: ‘Tensor’ does not name a type
462 | const Tensor& mat2,
| ^~~~~~
/home/xinanlin/xinanlin/pytorch/aten/src/ATen/native/mkldnn/xpu/Blas.cpp:463:8: error: ‘Scalar’ does not name a type
463 | const Scalar& beta,
| ^~~~~~
/home/xinanlin/xinanlin/pytorch/aten/src/ATen/native/mkldnn/xpu/Blas.cpp:464:8: error: ‘Scalar’ does not name a type
464 | const Scalar& alpha,
| ^~~~~~
/home/xinanlin/xinanlin/pytorch/aten/src/ATen/native/mkldnn/xpu/Blas.cpp:465:8: error: ‘Tensor’ does not name a type
465 | const Tensor& result) {
| ^~~~~~
```
### Versions
PyTorch version: 2.7.0a0+gitd0070ca
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,776,650,764
|
[CUDA] parse arch-conditional compute-capability when building extensions
|
eqy
|
closed
|
[
"module: cpp-extensions",
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
don't choke on arch-conditional compute capabilities e.g., `sm_90a`: #144037
cc @malfet @zou3519 @xmfan @ptrblck @msaroufim
| true
|
2,776,647,942
|
NotImplementedError: Output channels > 65536 not supported at the MPS device.
|
AimoneAndex
|
open
|
[
"module: convolution",
"triaged",
"module: mps"
] | 6
|
NONE
|
### 🐛 Describe the bug
I used the newest edited version of this fork and torch==2.7.0.dev, but it just said "NotImplementedError: Output channels > 65536 not supported at the MPS device" instead of giving me a correct output.
1. Run the following command:
wav = tts.tts(text="Hello,world!", speaker_wav="project/input/001.wav", language="en")
2. See the error:
Traceback (most recent call last):
File "", line 1, in
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/api.py", line 312, in tts
wav = self.synthesizer.tts(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/utils/synthesizer.py", line 408, in tts
outputs = self.tts_model.synthesize(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/tts/models/xtts.py", line 410, in synthesize
return self.full_inference(text, speaker_wav, language, **settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/tts/models/xtts.py", line 471, in full_inference
(gpt_cond_latent, speaker_embedding) = self.get_conditioning_latents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/tts/models/xtts.py", line 354, in get_conditioning_latents
speaker_embedding = self.get_speaker_embedding(audio, load_sr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/tts/models/xtts.py", line 309, in get_speaker_embedding
self.hifigan_decoder.speaker_encoder.forward(audio_16k.to(self.device), l2_norm=True)
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/encoder/models/resnet.py", line 167, in forward
x = self.torch_spec(x)
^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/encoder/models/base_encoder.py", line 26, in forward
return torch.nn.functional.conv1d(x, self.filter).squeeze(1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Output channels > 65536 not supported at the MPS device.
### Versions
coqui-tts 0.25.1 ....../coqui-ai-TTS
coqui-tts-trainer 0.2.2
torch 2.7.0.dev20250108
torchaudio 2.6.0.dev20250108
torchvision 0.22.0.dev20250108
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,776,637,291
|
[Trace Python Dispatcher] Support FuncTorchInterpreter
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144444
* #144439
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,776,635,528
|
[mps/inductor] Add support for tanh().
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
MEMBER
|
Fixes test_tanh() in the inductor testsuite.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,776,607,992
|
[XPU] Fix Namespace Error due to Clang-tidy
|
ratnampa
|
closed
|
[
"module: cpu",
"open source",
"topic: not user facing",
"ciflow/xpu",
"module: xpu"
] | 5
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
Recent changes to aten/src/ATen/native/mkldnn/xpu/Blas.cpp from PR https://github.com/pytorch/pytorch/pull/144014 lead to build failures.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,776,586,997
|
[CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt
|
eqy
|
closed
|
[
"module: cuda",
"module: cublas",
"open source",
"module: half",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"matrix multiplication",
"ciflow/rocm",
"ci-no-td"
] | 74
|
COLLABORATOR
|
Test for `cublasGemmEx` added, still need to figure out the best way to exercise the other APIs...
cc @ptrblck @msaroufim @csarofeen @xwang233
| true
|
2,776,548,910
|
[XPU] Fix Syntax Error due to Clang-tidy
|
ratnampa
|
closed
|
[
"module: cpu",
"open source"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
Recent changes to `aten/src/ATen/native/mkldnn/xpu/Blas.cpp` from PR https://github.com/pytorch/pytorch/pull/144014 lead to build failures.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,776,545,036
|
[Trace Python dispatcher] Support torch.DispatchKey & torch.DispatchKeySet
|
yanboliang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144444
* __->__ #144439
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,776,529,847
|
add some basic shape_env info to tlparse
|
bdhirsh
|
open
|
[
"release notes: composability",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
It seems generally useful to add information such as shape_env hints for a given compile into tlparse, especially since hints are used by the partitioner / inductor and can affect compile times and performance (see [doc](https://docs.google.com/document/d/1SyOGPKKVQAmnEY-LkGbcVj1H6TTMtsXu0DRVVQJqCGw/edit?tab=t.0#bookmark=id.bc18vdapcxwf))
Example tlparse: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpuO2Pey/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
From this code:
```
import torch

@torch.compile
def f(x, y):
    if x.shape[0] > y.shape[0]:
        return x.sum() + y.sum()
    else:
        return x.sum() - y.sum()

x = torch.randn(5, 6)
y = torch.randn(6, 3)
out = f(x, y)
x = torch.randn(7, 3)
y = torch.randn(8, 3)
out = f(x, y)
```
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144438
* #144097
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,776,522,917
|
[dynamo][graph break] omegaconf ListConfig `__contains__`
|
anijain2305
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Fixing this graph break should help with the performance regression in https://github.com/pytorch/pytorch/issues/132872
~~~
+    @unittest.skipIf(not HAS_OMEGACONF, "missing omegaconf package")
+    def test_omegaconf_listconfig_contains(self):
+        def fn(cfg, x):
+            if 1 in cfg:
+                return torch.sin(x)
+            return torch.cos(x)
+
+        config = list_config = OmegaConf.create([1, 2, 3, {"key": "value"}])
+
+        x = torch.randn(4)
+        opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
+        self.assertEqual(fn(config, x), opt_fn(config, x))
+
+
~~~
### Error logs
```
File "/home/anijain/local/pytorch/torch/_dynamo/symbolic_convert.py", line 1076, in run
while self.step():
^^^^^^^^^^^
File "/home/anijain/local/pytorch/torch/_dynamo/symbolic_convert.py", line 986, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/anijain/local/pytorch/torch/_dynamo/symbolic_convert.py", line 1676, in COMPARE_OP
self.push(compare_op_handlers[inst.argval](self, self.popn(2), {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anijain/local/pytorch/torch/_dynamo/variables/builtin.py", line 1008, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anijain/local/pytorch/torch/_dynamo/variables/builtin.py", line 859, in builtin_dispatch
unimplemented(error_msg)
File "/home/anijain/local/pytorch/torch/_dynamo/exc.py", line 356, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: builtin: eq [<class 'torch._dynamo.variables.user_defined.UserDefinedObjectVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>] False
from user code:
File "/home/anijain/local/pytorch/test/dynamo/test_repros.py", line 6137, in fn
if 1 in cfg:
File "/home/anijain/.conda/envs/pytorch-3.11/lib/python3.11/site-packages/omegaconf/listconfig.py", line 606, in __contains__
if x == item:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test/dynamo/test_repros.py ReproTests.test_omegaconf_listconfig_contains
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,776,505,215
|
[aarch64] fix TORCH_CUDA_ARCH_LIST for cuda arm build
|
tinglvv
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Fixes #144037
Root cause: the CUDA ARM build does not call `.ci/manywheel/build_cuda.sh`, but calls `.ci/aarch64_linux/aarch64_ci_build.sh` instead. Therefore, https://github.com/pytorch/pytorch/blob/main/.ci/manywheel/build_cuda.sh#L56 was never executed for the CUDA ARM build.
This PR adds the equivalent code to `.ci/aarch64_linux/aarch64_ci_build.sh` as a workaround.
In the future, we should aim to integrate `.ci/aarch64_linux/aarch64_ci_build.sh` back into `.ci/manywheel/build_cuda.sh`.
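A quick way to sanity-check a resulting wheel (a sketch, not part of the CI change itself) is to inspect the architectures baked into the binary:
```
# The arch list should reflect what TORCH_CUDA_ARCH_LIST selected for the
# ARM CUDA build; the values printed below are illustrative only.
import torch

print(torch.cuda.get_arch_list())  # e.g. ['sm_80', 'sm_86', 'sm_90', ...]
print(torch.version.cuda)
```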
cc @ptrblck @atalman @malfet @nWEIdia @eqy
| true
|
2,776,498,030
|
Need newer driver
|
drisspg
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144435
| true
|
2,776,478,366
|
Int8 Inference Slowdown Compared to FP32 when using PyTorch 2 Export Quantization with X86 Backend through Inductor
|
YixuanSeanZhou
|
closed
|
[
"oncall: quantization"
] | 20
|
NONE
|
### 🐛 Describe the bug
INT8 inference is significantly slower compared to FP32 after performing quantization following the tutorial: [PyTorch 2 Export Quantization with X86 Backend through Inductor](https://pytorch.org/tutorials/prototype/pt2e_quant_x86_inductor.html)
When benchmarking the inference speed I observe:
```
Python before quant inference time: 0.022294288396835327
Python inference time after quant: 1.5116438150405884
```
A complete repro of the issue:
```
import torch
import torchvision
import tempfile
import time
import sys
import os

torch.manual_seed(42)

from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
# from torch.ao.quantization.quantizer.xnnpack_quantizer import (
#     XNNPACKQuantizer, get_symmetric_quantization_config,
# )
from torch.ao.quantization.quantizer.x86_inductor_quantizer import (
    X86InductorQuantizer, get_default_x86_inductor_quantization_config
)
import torch._inductor.config as config

config.cpp_wrapper = True
RUN_NUM = 1000

def print_size_of_model(model):
    torch.save(model.state_dict(), "temp.p")
    print("Size (MB):", os.path.getsize("temp.p")/1e6)
    os.remove("temp.p")

def main(argv):
    workdir = "/tmp"
    # Step 1: Capture the model
    args = (torch.randn(1, 3, 224, 224),)
    m = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT).eval()
    m1 = m.eval()
    print("before size")
    print_size_of_model(m)
    with torch.no_grad():
        compiled = torch.compile(m1, backend="inductor")
        before = time.time()
        for i in range(RUN_NUM):
            compiled(*args)
        print(f"Python inference time before quant: {(time.time() - before) / RUN_NUM}")
    # torch.export.export_for_training is only available for torch 2.5+
    m = capture_pre_autograd_graph(m, args)
    # Step 2: Insert observers or fake quantize modules
    # quantizer = XNNPACKQuantizer().set_global(
    #     get_symmetric_quantization_config())
    quantizer = X86InductorQuantizer().set_global(
        get_default_x86_inductor_quantization_config())
    m = prepare_pt2e(m, quantizer)
    # Step 2.5: fake calibration
    m(*args)
    # Step 3: Quantize the model
    m = convert_pt2e(m, fold_quantize=True)
    print(m)
    print("after size")
    print_size_of_model(m)
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16, enabled=True), torch.no_grad():
        before_compile_time = time.time()
        compiled_quant = torch.compile(m, backend="inductor")
        print("finished compilation", time.time() - before_compile_time)
        before = time.time()
        for i in range(RUN_NUM // 100):
            compiled_quant(*args)
            # print("finished run", i)
        print(f"Python inference time after quant: {(time.time() - before) / (RUN_NUM // 100)}")
    os.makedirs(workdir, exist_ok=True)

if __name__ == '__main__':
    main(sys.argv)
```
### Versions
Versions
```
python collect_env.py
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.5.1+cu121
[pip3] torchaudio==2.5.1+cu121
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.5.1+cu121 pypi_0 pypi
[conda] torchaudio 2.5.1+cu121 pypi_0 pypi
[conda] torchvision 0.20.1+cu121 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,776,463,633
|
Torch 2.6.0 rc is broken on Amazon linux 2023
|
atalman
|
closed
|
[
"high priority",
"triage review",
"module: binaries"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Follow the instructions from https://github.com/pytorch/pytorch/issues/138324 and install torch==2.6.0:
```
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib64/python3.9/site-packages/torch/__init__.py", line 379, in <module>
from torch._C import * # noqa: F403
ImportError: libcusparseLt.so.0: cannot open shared object file: No such file or directory
```
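A quick diagnostic sketch to confirm the missing library independently of torch (run on the affected machine):
```
# Try to load libcusparseLt directly; on an affected Amazon Linux 2023 host
# this fails with the same "cannot open shared object file" error.
import ctypes

try:
    ctypes.CDLL("libcusparseLt.so.0")
except OSError as e:
    print("libcusparseLt is not resolvable:", e)
```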
### Versions
2.6.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @osalpekar
| true
|
2,776,463,102
|
[ROCm] hipblaslt rowwise f8 gemm
|
jeffdaily
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: rocm",
"release notes: nn",
"ciflow/rocm"
] | 14
|
COLLABORATOR
|
hipBLASLt added rowwise fp8 GEMM support; this PR integrates it with `scaled_mm`.
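A rough sketch of the rowwise-scaled path being exercised (hedged: `torch._scaled_mm` is a private API, the fp8 dtype and the scale constants below are assumptions for illustration, and the exact requirements differ per device):
```
# Illustrative rowwise-scaled fp8 GEMM via torch._scaled_mm (private API).
# Assumes an fp8-capable GPU; float8_e4m3fnuz on MI300-class ROCm devices,
# float8_e4m3fn elsewhere. The scale computation is a rough placeholder.
import torch

M, K, N = 256, 512, 128
a = torch.randn(M, K, device="cuda")
b = torch.randn(N, K, device="cuda")  # passed transposed (column-major) below

fp8 = torch.float8_e4m3fnuz if torch.version.hip else torch.float8_e4m3fn

# Per-row scales for a (M x 1) and per-column scales for b.t() (1 x N), in fp32.
scale_a = a.abs().amax(dim=1, keepdim=True).float() / 448.0
scale_b = (b.abs().amax(dim=1, keepdim=True).float() / 448.0).t()

a_fp8 = (a / scale_a).to(fp8)
b_fp8 = (b / scale_b.t()).to(fp8)

out = torch._scaled_mm(
    a_fp8, b_fp8.t(), scale_a=scale_a, scale_b=scale_b, out_dtype=torch.bfloat16
)
print(out.shape)  # (M, N)
```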
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,776,457,381
|
inductor slow kernel choice for max(x) if x is not contiguous
|
vkuzo
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I noticed that if `x` is not contiguous, torchinductor generates unexpectedly slow cuda kernels for `torch.max(x)`. Small repro:
```
import torch
import fire
from torch._inductor.utils import do_bench_using_profiling

torch.manual_seed(0)

def get_max(x):
    x = torch.max(x)
    return x

def run(is_contiguous: bool = True):
    x = torch.randn(4096, 8192, dtype=torch.bfloat16, device="cuda")
    if not is_contiguous:
        x = x.t().contiguous().t()
    get_max_c = torch.compile(get_max)
    # warmup
    y = get_max_c(x)
    # perf
    duration_microseconds = do_bench_using_profiling(lambda: get_max_c(x))
    print('duration in microseconds', duration_microseconds)

if __name__ == '__main__':
    fire.Fire(run)
```
Running this script with `is_contiguous=True` results in the expected (to me) pattern of a two stage reduction. Running this script with `is_contiguous=False` results in a three stage reduction, which seems to be significantly slower than the two-stage variant - ~8x slower on my machine for this example input.
Example script calls with logs:
```
> TORCH_LOGS_FORMAT="short" TORCH_LOGS="output_code" python ~/local/tmp/20250108_max_repro.py --is_contiguous True 2>&1 | with-proxy gh gist create
- Creating gist...
✓ Created secret gist
https://gist.github.com/vkuzo/4798af9dbd1a13ff66d9586312c04d03
> TORCH_LOGS_FORMAT="short" TORCH_LOGS="output_code" python ~/local/tmp/20250108_max_repro.py --is_contiguous False 2>&1 | with-proxy gh gist create
- Creating gist...
✓ Created secret gist
https://gist.github.com/vkuzo/6b9d3e1397ff808b4897d75a59b7eaab
```
For context, I noticed this during a refactor of torchao float8 code (https://github.com/pytorch/ao/pull/1481). I can work around this by manually passing contiguous tensors to `max(tensor)` in the modeling code (see the sketch below), but I am sharing it because it was unexpected.
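For completeness, the workaround in the modeling code is just:
```
# Sketch of the workaround: materialize a contiguous copy so inductor emits
# the faster two-stage reduction for the full max().
import torch

def get_max(x):
    return torch.max(x.contiguous())
```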
### Versions
PyTorch commit: 768d73f6929be2a6eb81fe7424416dceb4a4aca9 (main branch)
hardware: NVIDIA H100
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng
| true
|
2,776,439,263
|
Request for: torch-2.5.1-cp313-cp313-win_amd64.whl
|
tlh45342
|
closed
|
[] | 1
|
NONE
|
I don't see a posting for Python 3.13 (cp313) there.
I am guessing Mac users would be interested as well.
| true
|
2,776,423,628
|
Fix overflows in checkInBoundsForStorage
|
mikaylagawarecki
|
closed
|
[
"Stale"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144429
| true
|
2,776,419,294
|
[ONNX] Update images and APIs to onnx_dynamo.rst
|
pytorchbot
|
closed
|
[
"open source",
"release notes: onnx"
] | 1
|
COLLABORATOR
|
Update the exporter result images, and delete the functions/classes that belong to `torch.onnx.dynamo_export`.
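For context, a hedged sketch of the export path the updated page centers on (API shape assumed from recent releases; not part of this docs change itself):
```
# Sketch (assumes a recent torch release): the dynamo-based exporter is now
# reached via torch.onnx.export(..., dynamo=True) instead of
# torch.onnx.dynamo_export.
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x.relu()

onnx_program = torch.onnx.export(M(), (torch.randn(2, 3),), dynamo=True)
onnx_program.save("m.onnx")
```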
| true
|
2,776,400,115
|
Inductor dashboard benchmarks: swap unused freeze_autotune_cudagraphs workflow for cppwrapper workflow
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144427
GitHub limits us to 10 inputs per workflow_dispatch job, so this PR replaces an input that is no longer used (freeze_autotune_cudagraphs) with the cppwrapper input. See [the HUD](https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Thu%2C%2002%20Jan%202025%2016%3A30%3A07%20GMT&stopTime=Thu%2C%2009%20Jan%202025%2016%3A30%3A07%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=gh/benjaminglass1/53/orig&lCommit=4c3d3ad3c7886cbda9705b41c6db5fa7da0d6fe9&rBranch=main&rCommit=00df63f09f07546bacec734f37132edc58ccf574) for an example showing that it works and displays sane output.
| true
|
2,776,293,690
|
[Pipelining] Fix ZeroBubblePP+DDP grad reduction issue
|
wconstab
|
closed
|
[
"oncall: distributed"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144521
* __->__ #144426
* #144352
* #144345
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o
| true
|