| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,877,582,641
|
[test][do not merge] Upgrade oneDNN to v3.7 (3)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,877,580,935
|
[test][do not merge] Upgrade oneDNN to v3.7 (2)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,877,576,911
|
[test][do not merge] Upgrade oneDNN to v3.7 (1)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,877,534,181
|
[inductor][user triton] comprehensive_padding + user-defined triton kernels can produce wrong results
|
davidberard98
|
closed
|
[
"high priority",
"triage review",
"oncall: pt2",
"module: inductor",
"module: user triton"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
If a mm kernel produces non-contiguous outputs due to comprehensive padding, and that output is passed into a user-defined triton kernel, then the strides may be passed incorrectly to the user-defined triton kernel. Repro below:
<details>
```python
import torch
import triton
import triton.language as tl

@triton.jit
def addition_kernel(x_ptr, y_ptr, out_ptr, stride_x0, stride_x1, stride_y0, stride_y1, stride_o0, stride_o1, SIZE_0: tl.constexpr, SIZE_1: tl.constexpr, BLOCK_SIZE_0: tl.constexpr, BLOCK_SIZE_1: tl.constexpr):
    for i0 in range(0, SIZE_0, BLOCK_SIZE_0):
        off0 = tl.arange(0, BLOCK_SIZE_0) + i0
        mask0 = off0 < SIZE_0
        for i1 in range(0, SIZE_1, BLOCK_SIZE_1):
            off1 = tl.arange(0, BLOCK_SIZE_1) + i1
            mask1 = off1 < SIZE_1
            off_x = stride_x0 * off0[:, None] + stride_x1 * off1[None, :]
            off_y = stride_y0 * off0[:, None] + stride_y1 * off1[None, :]
            off_out = stride_o0 * off0[:, None] + stride_o1 * off1[None, :]
            mask = mask0[:, None] & mask1[None, :]
            x_val = tl.load(x_ptr + off_x, mask=mask)
            y_val = tl.load(y_ptr + off_y, mask=mask)
            res = x_val + y_val
            tl.store(out_ptr + off_out, res, mask=mask)

@torch._library.triton_op("testing::triton_add", mutates_args=())
def triton_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    SIZE_0, SIZE_1 = x.size()
    out = torch.zeros((SIZE_0, SIZE_1), dtype=x.dtype, device=x.device)
    torch._library.capture_triton(addition_kernel)[(1,)](
        x,
        y,
        out,
        x.stride(0),
        x.stride(1),
        y.stride(0),
        y.stride(1),
        out.stride(0),
        out.stride(1),
        SIZE_0,
        SIZE_1,
        64,
        16,
    )
    return out

def fn(x, y, z):
    r = x @ y
    return triton_add(r, z)

def get_input():
    x = torch.randn((1024, 1024), device="cuda", dtype=torch.bfloat16)
    y = torch.randn((1024*16 - 7, 1024), device="cuda", dtype=torch.bfloat16).T
    z = torch.randn((1024, 1024*16 - 7), device="cuda", dtype=torch.bfloat16)
    return x, y, z

x, y, z = get_input()
expected = torch.compile(fn)(x, y, z)
actual = fn(x, y, z)
actual2 = fn(x, y, z)
torch.testing.assert_close(actual2, actual)
torch.testing.assert_close(expected, actual)
```
</details>
### Versions
pytorch 80d3afc69, triton 00dad9dba. H100.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @amjames @aakhundov @oulgen
| true
|
2,877,486,394
|
update _unsafe_set_version_counter to accept lists of tensors
|
zqwenn
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I encountered an issue that has been resolved by this [Pull Request](https://github.com/pytorch/pytorch/pull/137921). I would like to request its inclusion in version 2.6+.
### Versions
torch==2.6.0
| true
|
2,877,460,832
|
Inconsistent results from `is_compile_supported` with equivalent device identifiers
|
default1360
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
### 🐛 Describe the bug
The `is_compile_supported` function returns inconsistent results for equivalent device identifiers:
- `is_compile_supported("cuda")` returns `True`
- `is_compile_supported("cuda:0")` returns `False`
If it's not a bug, feel free to close this issue.
```
import torch
from torch._dynamo.utils import is_compile_supported
if not torch.cuda.is_available():
    exit()
result_cuda = is_compile_supported("cuda")
result_cuda0 = is_compile_supported("cuda:0")
print("result_cuda:", result_cuda)
print("result_cuda0:", result_cuda0)
```
### Versions
torch 2.6.0
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,877,410,582
|
Use /permissive- for torch libraries in MSVC builds
|
cyyever
|
open
|
[
"module: windows",
"triaged",
"open source",
"windows-triaged",
"Stale",
"release notes: jit",
"topic: not user facing"
] | 5
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,877,379,926
|
[dynamo][optimizers] Install ID_GUARDED tensors into the Fx graph
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147824
Earlier, with the inline flag, we were lifting id-guarded tensors as inputs to the Fx graph. But this offers no benefit. The main idea behind lifting parameters as inputs was to reuse the compilation units across many instances of the nn-module. However, if we are guarding on the `id`, we are explicitly specializing the compiled artifact to the parameter.
This PR installs the parameters back into the graph. The benefit is the removal of all pre-graph bytecode that extracts the id-guarded tensors from locals/globals. This increases the speedup from 1.67x to 1.75x for an internal model that has a large number of optimizer parameters.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,877,362,536
|
remove assertion in expand_to_full_mesh_op_strategy
|
zqwenn
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"Stale",
"release notes: distributed (dtensor)"
] | 6
|
CONTRIBUTOR
|
Fixes #147732
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,877,358,060
|
`AssertionError: Mixing fake modes NYI` in FakeTensorMode context
|
default1360
|
closed
|
[
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: pt2-dispatcher"
] | 2
|
NONE
|
### 🐛 Describe the bug
When using FakeTensorMode in conjunction with FX Graph operations in PyTorch, an AssertionError: Mixing fake modes NYI is raised. I'm not certain whether this behavior is expected or if it's a bug. If it's not a bug, feel free to close this issue.
```
import torch
from torch._subclasses import FakeTensorMode
import torch.fx as fx
def graph_call_function(graph, fn, *args, **kwargs):
    fake_args, fake_kwargs = torch.utils._pytree.tree_map(
        lambda node: node.meta["val"] if isinstance(node, fx.Node) else node,
        (args, kwargs),
    )
    with FakeTensorMode() as fake_mode:
        fake_result = fn(*fake_args, **fake_kwargs)
    node = graph.call_function(fn, args, kwargs)
    node.meta["val"] = fake_result
    return node
# Create fake tensors and FX graph with nodes containing them in metadata
fake_mode = FakeTensorMode()
real_tensor = torch.rand(4)
fake_tensor = fake_mode.from_tensor(real_tensor)
graph = fx.Graph()
placeholder_node = graph.placeholder('x')
placeholder_node.meta["val"] = fake_tensor
# Create a node that stores FakeTensor in its metadata
node = graph_call_function(graph, torch.add, placeholder_node, placeholder_node)
```
### Versions
torch 2.6.0
cc @chauhang @penguinwu @eellison @zou3519 @bdhirsh
| true
|
2,877,350,911
|
Use torch_compile_options for c10 libraries
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: build",
"topic: improvements",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/xpu"
] | 28
|
COLLABORATOR
|
c10, c10_cuda, c10_hip and c10_xpu are given additional compile options by torch_compile_options, which are more restrictive and can help reveal potential bugs inside the code.
| true
|
2,877,347,103
|
[WIP][ptd][nccl] use current-stream as nccl-stream under async=False mode
|
cenzhaometa
|
open
|
[
"oncall: distributed",
"fb-exported",
"Stale",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 9
|
CONTRIBUTOR
|
Summary:
PTD current workflow:
- PTD creates its own dedicated `ncclStream` for comm operations
- it will first add a dependency on the current stream (typically the compute stream) to ensure tensors are ready before invoking the collective
Such stream synchronization becomes expensive in the inference world (CPU overhead: 70us vs. GPU kernel time: 160us).
This diff:
- introduces a new env `TORCH_NCCL_USE_CURRENT_STREAM_AS_NCCL_STREAM=1`
- when it's specified, PTD uses the current stream as the nccl stream and avoids the stream sync
This helps shave off 50% of the CPU overhead **(70us -> 35us)**, which reduces the total CPU/GPU time from **230us to 195us (~15%)**. A minimal sketch of enabling the flag is shown below.
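For context, a minimal sketch of how the proposed flag would be enabled from Python, assuming a `torchrun` launch and this diff applied (illustrative only, not part of the diff):
```python
import os
# Must be set before the NCCL process group is created.
os.environ["TORCH_NCCL_USE_CURRENT_STREAM_AS_NCCL_STREAM"] = "1"

import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
t = torch.ones(8, device="cuda")
# With the flag set, the collective is issued on the current (compute) stream,
# skipping the extra dependency/sync on a dedicated NCCL stream.
dist.all_reduce(t)
dist.destroy_process_group()
```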
Test Plan:
# before
```
[cenzhao@devgpu039.atn3 ~/fbsource/fbcode (2265d32f0)]$ buck2 run @//mode/opt-amd-gpu -c fbcode.split-dwarf=True //param_bench/train/comms/pt:launcher -- --launcher mpi --nnode 1 --collective all_reduce --b 20M --e 20M --data-type bfloat16 --backend nccl --n 100 --w 5 --envs "NCCL_DEBUG_FILE=/tmp/dedicated_log_rccl.%h.%p.log;NCCL_DEBUG=INFO;NCCL_DEBUG_SUBSYS=INIT,COLL;MSCCL_ALGO_DIR=/data/users/${USER}/fbsource/third-party/rccl/develop/tools/msccl-algorithms;RCCL_MSCCLPP_THRESHOLD=$((128*1024*1024));RCCL_MSCCLPP_ENABLE=1;TORCH_NCCL_USE_TENSOR_REGISTER_ALLOCATOR_HOOK=1;" --size-start-profiler 20M
```
https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/dynocli/devgpu039.atn3.facebook.com/rank-0.Feb_24_16_19_28.354787.pt.trace.json.gz&bucket=hpc_traces
{F1975408857}
- c10d::allreduce_(69us)
- cudaStreamSync (23us)
- nccl::all_reduce(26us)
# after
```
[cenzhao@devgpu039.atn3 ~/fbsource/fbcode (2265d32f0)]$ buck2 run @//mode/opt-amd-gpu -c fbcode.split-dwarf=True //param_bench/train/comms/pt:launcher -- --launcher mpi --nnode 1 --collective all_reduce --b 20M --e 20M --data-type bfloat16 --backend nccl --n 100 --w 5 --envs "NCCL_DEBUG_FILE=/tmp/dedicated_log_rccl.%h.%p.log;NCCL_DEBUG=INFO;NCCL_DEBUG_SUBSYS=INIT,COLL;MSCCL_ALGO_DIR=/data/users/${USER}/fbsource/third-party/rccl/develop/tools/msccl-algorithms;RCCL_MSCCLPP_THRESHOLD=$((128*1024*1024));RCCL_MSCCLPP_ENABLE=1;TORCH_NCCL_USE_TENSOR_REGISTER_ALLOCATOR_HOOK=1;TORCH_NCCL_USE_CURRENT_STREAM_AS_NCCL_STREAM=1" --size-start-profiler 20M
```
https://www.internalfb.com/intern/perfdoctor/trace_view?filepath=tree/traces/dynocli/devgpu039.atn3.facebook.com/rank-4.Feb_24_16_22_56.534269.pt.trace.json.gz&bucket=hpc_traces
{F1975408962}
- c10d::allreduce_(37us)
- cudaStreamSync (gone)
- nccl::all_reduce(20us)
Differential Revision: D70135605
Resolves #147729
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,877,302,664
|
[dynamo][guards] Dont insert ID and TENSOR_MATCH at the same time
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147824
* __->__ #147819
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,877,301,660
|
Bug in `torch.ao.nn.quantized.Sigmoid` Parameter Restoration after `state_dict` Loading
|
vwrewsge
|
open
|
[
"oncall: quantization"
] | 1
|
NONE
|
### 🐛 Describe the bug
There seems to be an issue in PyTorch's quantized `Sigmoid` module (`nnq_Sigmoid`) where the quantization parameters (`scale` and `zero_point`) are not properly restored when loading the state dictionary (`state_dict`) into a newly initialized module with different initial parameters.
Code:
```
import torch
from torch.ao.nn.quantized import Sigmoid as nnq_Sigmoid
def test_sigmoid_serialization():
    # Original parameters
    scale_original = 0.1
    zero_point_original = 5
    # Create original module and save state
    quant_mod_original = nnq_Sigmoid(scale_original, zero_point_original)
    state_dict = quant_mod_original.state_dict()
    # New parameters (different from original)
    scale_new = 0.5
    zero_point_new = 10
    # Create new module and load original state
    quant_mod_new = nnq_Sigmoid(scale_new, zero_point_new)
    quant_mod_new.load_state_dict(state_dict)
    # Check if parameters were restored
    print("quant_mod_new.output_scale:", quant_mod_new.output_scale)
    print("scale_original: ", scale_original)
    print("quant_mod_new.output_zero_point:", quant_mod_new.output_zero_point)
    print("zero_point_original:", zero_point_original)
test_sigmoid_serialization()
```
Output:
The parameters `scale` and `zero_point` are not restored correctly after loading the state dictionary. The output shows that the parameters do not match the original values, which implies that loading the state dictionary does not restore the quantization parameters.
```
quant_mod_new.output_scale: 0.5
scale_original: 0.1
quant_mod_new.output_zero_point: 10
zero_point_original: 5
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim
| true
|
2,877,289,439
|
[test][do not merge] Upgrade oneDNN to v3.7 (VS2019)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 4
|
COLLABORATOR
|
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,877,244,407
|
DISABLED test_inductor_broadcast (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"oncall: distributed",
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d",
"oncall: pt2"
] | 15
|
NONE
|
Platforms: inductor, linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_broadcast&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37757262792).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_broadcast`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/test_c10d_functional_native.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr @chauhang @penguinwu
| true
|
2,877,232,714
|
torch.compile with backend tensorrt fails with constraint violation issues
|
peri044
|
closed
|
[
"oncall: pt2",
"module: dynamic shapes"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Our basic test case is
```py
import torch
import torch_tensorrt as torchtrt

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, 3, stride=1, bias=True)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        out = self.conv(x)
        out = self.relu(out)
        return out

model = MyModule().eval().cuda()
compile_spec = {
    "device": torchtrt.Device("cuda:0"),
    "enabled_precisions": {torch.float},
    "ir": ir,  # NOTE: `ir` comes from the surrounding test harness
    "pass_through_build_failures": True,
    "min_block_size": 1,
    "cache_built_engines": False,
    "reuse_cached_engines": False,
}
input_bs4 = torch.randn((4, 3, 224, 224)).to("cuda")
torch._dynamo.mark_dynamic(input_bs4, 0, min=2, max=8)
# Compile the model
trt_model = torch.compile(model, backend="tensorrt", options=compile_spec)
trt_model(input_bs4)
```
This testcases passes with PyTorch 2.6 but encounters constraint violation issue with latest torch nightly. Please find the attached log [out.txt](https://github.com/user-attachments/files/18958256/out.txt)
### Error logs
```py
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['x'].size()[0])! For more information, run with TORCH_LOGS="+dynamic".
E - Not all values of L['x'].size()[0] = L['x'].size()[0] in the specified range L['x'].size()[0] <= 8 satisfy the generated guard 4 <= L['x'].size()[0] and L['x'].size()[0] <= 8
```
### Versions
[pip3] torch==2.7.0.dev20250220+cu124
[pip3] torch-mlir==20250108.338
[pip3] torch_tensorrt==2.7.0.dev0+94ce1e0b4
[pip3] torchmetrics==1.4.0.post0
[pip3] torchprofile==0.0.4
[pip3] torchsurgeon==0.1.2
[pip3] torchvision==0.22.0.dev20250220+cu124
[pip3] triton==3.2.0
cc @chauhang @penguinwu @ezyang @bobrenjc93 @angelayi
| true
|
2,877,145,081
|
unbind_copy OpInfo entry causes an exception while running test_dtensor_ops.py
|
dayanandav
|
open
|
[
"oncall: distributed",
"triaged",
"bug",
"module: dtensor"
] | 1
|
NONE
|
### 🐛 Describe the bug
["unbind_copy"](https://github.com/pytorch/pytorch/blob/main/test/distributed/tensor/test_dtensor_ops.py#L435) entry under dtensor_fails(xfail) list cause below exception.
Cmd : python3 -m pytest -vs test_dtensor_ops.py --collect-only
Exception :
File "/home/pytorch/test/distributed/tensor/test_dtensor_ops.py", line 508, in <module>
class TestDTensorOps(DTensorOpTestBase):
File "/home/pytorch/test/distributed/tensor/test_dtensor_ops.py", line 517, in TestDTensorOps
@skipOps("TestDTensorOps", "test_dtensor_op_db", dtensor_fails)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pytorch/test/distributed/tensor/test_dtensor_ops.py", line 53, in skipOps
assert len(matching_opinfos) >= 1, f"Couldn't find OpInfo for {xfail}"
^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Couldn't find OpInfo for ('unbind_copy', '', None, None, True)
### Versions
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-1071-nvidia-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.71
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] optree==0.14.0
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.2 py311h5d046bc_0 conda-forge
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.14.0 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,877,057,618
|
Clean temporary directory at exit
|
arthurlw
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Issue: A temporary directory is created in [pytorch/torch/distributed/nn/jit/instantiator.py](https://github.com/arthurlw/pytorch/blob/clean-temp-directory-at-exit/torch/distributed/nn/jit/instantiator.py) but is never cleaned up, leading to a ResourceWarning on program exit.
Solution: Registered an `atexit` handler to properly clean up the temporary directory when the program exits.
Fixes #147744
**Line 23 in [0a49f8f](https://github.com/arthurlw/pytorch/commit/0a49f8fd3d34ee31f39bf7029ebb0b564433ac48)**
```python
23 atexit.register(_TEMP_DIR.cleanup)
```
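For reference, a minimal self-contained sketch of the pattern the fix relies on (the module-level name is hypothetical, not the exact instantiator code):
```python
import atexit
import tempfile

# Module-level temporary directory, analogous to the one created by the JIT instantiator.
_TEMP_DIR = tempfile.TemporaryDirectory()

# Ensure the directory is cleaned up when the interpreter exits,
# so no ResourceWarning is emitted for a leaked temporary directory.
atexit.register(_TEMP_DIR.cleanup)
```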
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,877,038,703
|
Enable ASAN in CUDA tests
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,877,030,757
|
[cutlass backend] try fix standlone runner test
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147811
Differential Revision: [D70147859](https://our.internmc.facebook.com/intern/diff/D70147859/)
Trying to fix this test one last time, especially when mixed mm is getting removed.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
@diff-train-skip-merge
| true
|
2,877,023,409
|
[HSDP2] `TORCH_NCCL_AVOID_RECORD_STREAMS` x `use_deterministic_algorithms` => NaN Gradient
|
leonardo0lyj
|
closed
|
[
"oncall: distributed",
"module: fsdp"
] | 2
|
NONE
|
### 🐛 Describe the bug
Hey Andrew @awgu, as a big fan of FSDP2, I found a potential BC issue with `TORCH_NCCL_AVOID_RECORD_STREAMS = True` 😄
*Demand*
- HSDP (2D mesh in FSDP2)
- `TORCH_NCCL_AVOID_RECORD_STREAMS = True`
- `torch.use_deterministic_algorithms(True)`
*Result*
- After `.backward()`, sharded parameter gradient has NaN, non-deterministically.
- NaN only happens when both `TORCH_NCCL_AVOID_RECORD_STREAMS = True` and `torch.use_deterministic_algorithms(True)`
*Minimal Code*
```python3
class TestTorchHSDP(DTensorTestBase):
    @property
    def world_size(self) -> int:
        return 4

    @with_comms
    def test_torch_hsdp(self):
        # NOTE:
        # `TORCH_NCCL_AVOID_RECORD_STREAMS`x`use_deterministic_algorithms`=> grads have NaN
        # 0 x 0 => No
        # 0 x 1 => No
        # 1 x 0 => No
        # 1 x 1 => Yes
        os.environ["TORCH_NCCL_AVOID_RECORD_STREAMS"] = "1"
        os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
        torch.use_deterministic_algorithms(True)
        # mesh & fsdp2
        from torch.distributed.device_mesh import init_device_mesh  # torch version: 2.4.1
        from torch.distributed._composable.fsdp import fully_shard, FSDPModule
        mesh = init_device_mesh("cuda", (2, 2), mesh_dim_names=("replicate", "shard"))
        # llama model
        from transformers import AutoConfig, LlamaModel  # transformer version: 4.46.1 (same for other version)
        from transformers.models.llama.modeling_llama import LlamaDecoderLayer
        dir_path = os.path.dirname(os.path.realpath(__file__))
        config = AutoConfig.from_pretrained(os.path.join(dir_path, "../llama/llama_config.json"))
        config.num_hidden_layers = 4
        config.hidden_size = 32
        config.intermediate_size = 88
        config.max_position_embeddings = 32
        config.vocab_size = 512
        torch.manual_seed(0)
        model: nn.Module = LlamaModel(config).cuda()
        # fsdp
        fully_shard_fn = functools.partial(
            fully_shard,
            mesh=mesh,
            # reshard_after_forward?  # same NaN
            # mixed precision?  # same NaN
        )
        for submod in model.modules():
            if isinstance(submod, LlamaDecoderLayer):
                fully_shard_fn(submod)
        fully_shard_fn(model)
        # model.set_reshard_after_backward()?  # same NaN
        # data
        torch.manual_seed(self.rank)
        # microbatches
        for i in range(99):
            if self.rank == 0:
                print(f"[DEBUG] microbatch {i}")
            input = torch.randint(low=0, high=config.vocab_size, size=(4, 4), device="cuda")
            output = model(input).last_hidden_state
            output.mean().backward()
            # check NaN grad
            fsdp_params = []
            for module in cast(nn.Module, model).modules():
                if isinstance(module, FSDPModule):
                    if fsdp_param_group := module._get_fsdp_state()._fsdp_param_group:
                        fsdp_params += fsdp_param_group.fsdp_params
            for fsdp_param in fsdp_params:
                sharded_param = fsdp_param.sharded_param
                if not sharded_param.requires_grad:
                    continue
                if sharded_param.grad is None:
                    continue
                local_grad = sharded_param.grad._local_tensor
                self.assertEqual(torch.isnan(local_grad).sum().item(), 0, msg=f"{local_grad}")
                replicate_grad = sharded_param.grad.full_tensor()
                self.assertEqual(torch.isnan(replicate_grad).sum().item(), 0, msg=f"{replicate_grad}")
```
*llama_config.json*
```json
{
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 2048,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"pad_token_id": 0,
"rms_norm_eps": 1e-06,
"tie_word_embeddings": false,
"torch_dtype": "float32",
"transformers_version": "4.46.1",
"use_cache": true,
"vocab_size": 32000
}
```
*Potential Culprit*
- The deterministic-algorithms setting fills empty tensors (gradient reduce input/output) with [NaN values](https://pytorch.org/docs/stable/generated/torch.empty_like.html)
- This NaN is then exposed in the sharded parameter grads when record streams are not used (see the sketch below).
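To make the suspected mechanism concrete, here is a minimal sketch, assuming the documented behavior that deterministic mode NaN-fills uninitialized floating-point memory (illustrative only, not the FSDP2 code path):
```python3
import torch

# With deterministic mode on (and torch.utils.deterministic.fill_uninitialized_memory
# left at its default of True), uninitialized float memory is filled with NaN.
torch.use_deterministic_algorithms(True)
staging = torch.empty(4)
print(staging)  # tensor([nan, nan, nan, nan])
# If such a staging buffer is consumed as a reduce input/output before being fully
# overwritten, and nothing keeps it alive via record_stream, the NaN can surface
# in the sharded gradients.
```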
I have been digging into this issue for 3 days, still no idea yet. 🤔️ What do you think? Appreciated 🙏
### Versions
PyTorch version: 2.4.1+gitee1b680
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.31
Python version: 3.11.10 (main, Nov 21 2024, 15:54:09) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.15.120.bsk.2-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-40GB
GPU 1: NVIDIA A800-SXM4-40GB
GPU 2: NVIDIA A800-SXM4-40GB
GPU 3: NVIDIA A800-SXM4-40GB
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 120
On-line CPU(s) list: 0-119
Thread(s) per core: 2
Core(s) per socket: 30
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz
Stepping: 6
CPU MHz: 2294.616
BogoMIPS: 4589.23
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.8 MiB
L1i cache: 1.9 MiB
L2 cache: 75 MiB
L3 cache: 108 MiB
NUMA node0 CPU(s): 0-59
NUMA node1 CPU(s): 60-119
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] torch==2.4.1+gitee1b680
[pip3] torchdistx==0.3.0.dev0+cu121
[pip3] torchvision==0.17.0+b2383d4
[pip3] triton==3.0.0
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,876,993,512
|
Fix crash in -[PTMCoreMLCompiler _compileModel:atPath:]
|
dinhvh
|
closed
|
[
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 4
|
CONTRIBUTOR
|
Summary:
We could hit one of those exceptions:
https://github.com/apple/coremltools/blob/main/modelpackage/src/ModelPackage.cpp#L205-L225
And it would make this code path crash.
Test Plan: build.
Differential Revision: D70122378
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,876,874,755
|
Back out "use copy2d in h2d/d2h copy when possible (#146256)"
|
s4ayub
|
open
|
[
"fb-exported",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
Summary:
Original commit changeset: aa7d1b82ac9d
Original Phabricator Diff: D69088122
Reviewed By: banitag1, 842974287, ngimel
Differential Revision: D70118904
| true
|
2,876,868,609
|
[AOTI][refactor] Fix a typo
|
desertfire
|
closed
|
[
"module: cpu",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147803
* __->__ #147807
* #147806
* #147805
Summary: defination -> definition
Differential Revision: [D70146182](https://our.internmc.facebook.com/intern/diff/D70146182)
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,868,554
|
[AOTI][refactor] Replace run_command_and_check with CppBuilder.build
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147803
* #147807
* __->__ #147806
* #147805
Summary: Consolidate cpp compilation action to CppBuilder. Reland https://github.com/pytorch/pytorch/pull/147680
Differential Revision: [D70146183](https://our.internmc.facebook.com/intern/diff/D70146183)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,868,501
|
[AOTI][refactor] Rename use_absolute_path to use_relative_path
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bc breaking",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147803
* #147807
* #147806
* __->__ #147805
Summary: The option really means to compile a cpp file using its basename instead of its full path. Reland https://github.com/pytorch/pytorch/pull/147679.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D70146184](https://our.internmc.facebook.com/intern/diff/D70146184)
| true
|
2,876,854,418
|
[ca] side-effect free initial trace: compiled_args
|
xmfan
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: distributed (c10d)",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going",
"ciflow/slow",
"module: compiled autograd",
"ci-no-td"
] | 9
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147891
* __->__ #147804
* #147796
* #147242
Makes methods const to prevent accidental mutation; changes are mainly in Error nodes and PyNode.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,876,782,861
|
[AOTI][refactor] Consolidate CppBuilder.build and CppBuilder.build_fbcode_cpu_re
|
desertfire
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147803
* #147807
* #147806
* #147805
Summary: Let CppBuilder handle all the cpp build logic
Differential Revision: [D70146185](https://our.internmc.facebook.com/intern/diff/D70146185)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,772,461
|
Update hw requirement for FP64 on "Getting Started on Intel GPU"
|
ZhaoqiongZ
|
closed
|
[
"open source",
"Merged",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Fixes #147731
| true
|
2,876,714,391
|
Incorrect Gradients at Boundary Points for `torch.nn.functional.hardswish`
|
vwrewsge
|
closed
|
[
"module: autograd",
"triaged",
"actionable"
] | 1
|
NONE
|
### 🐛 Describe the bug
The gradients of the hardswish function at boundary points (specifically at -3.0 and 3.0) are incorrect. The gradient at -3.0 should be 0, and the gradient at 3.0 should be 1.0. However, the current implementation produces incorrect values at these points.
# Code
```
import torch
# Test case to check hardswish_backward gradients at boundaries
input_values = torch.tensor([-3.0, 3.0, -2.0, 2.0], requires_grad=True)
out = torch.nn.functional.hardswish(input_values)
out.backward(torch.ones_like(input_values))
# Expected gradients:
# -3.0: should be 0 (flat region), but current code gives (x/3 +0.5) = -0.5
# 3.0: should be 1.0 (linear region), but current code gives 1.5
# -2.0: correct gradient (2*(-2)+3)/6 = -1/6 ≈ -0.1667
# 2.0: correct gradient (2*2+3)/6 = 7/6 ≈ 1.1667
expected_grad = torch.tensor([0.0, 1.0, -1/6, 7/6], dtype=torch.float32)
print(input_values.grad)
print(expected_grad)
```
# Output
```
tensor([-0.5000, 1.5000, -0.1667, 1.1667])
tensor([ 0.0000, 1.0000, -0.1667, 1.1667])
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,876,704,082
|
test
|
eellison
|
open
|
[
"Stale",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147800
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,678,180
|
Failed to run autotuning code block: Triton Error [CUDA]: device-side assert triggered
|
bhack
|
closed
|
[
"triaged",
"oncall: pt2",
"module: aotinductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
After export, I was trying to AOTI-compile and package a model, but the process failed.
### Error logs
Here is part of the inductor+ log near the failure.
[partial_inductor.log](https://github.com/user-attachments/files/18952979/partial_inductor.log)
### Versions
nightly
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi
| true
|
2,876,622,592
|
stage 1 of deprecating silent fallback of tuning gemm
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 34
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147798
Differential Revision: [D70045778](https://our.internmc.facebook.com/intern/diff/D70045778/)
context:
https://github.com/pytorch/pytorch/issues/147479
For the most part, this should not change the behavior.
For int_mm, I also removed
```
# TODO: Re-enable eager mode implementation once cuBLAS is fixed
if use_cutlass or use_triton_template(layout, enable_int32=True):
    choices = []
```
because I think it is unwanted.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,614,286
|
[Inductor-CPU] Avoid memory allocator lock contention in the GEMM template
|
sanchitintel
|
open
|
[
"open source",
"Stale",
"ciflow/trunk",
"topic: performance",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
## Summary
Use a stack-allocated buffer in the GEMM template, whenever possible, to avoid memory allocator lock contention. It'd probably only save us a few cycles.
Based on a quick glance at the `get_cache_blocking` code, it looks like `Mc_blocks * Mr * Nc_blocks * Nr` wouldn't exceed the size of per-core L2 cache, so it's safe to assume that it'd be smaller than the default per-thread stack size on Linux.
Didn't observe a discernible difference in performance, but can we still land this change to remove some degree of non-determinism?
Thanks!
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,567,278
|
[ca] side-effect free initial trace: GraphTask
|
xmfan
|
closed
|
[
"Merged",
"Reverted",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"module: compiled autograd",
"ci-no-td"
] | 6
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147891
* #147804
* __->__ #147796
* #147242
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,876,549,424
|
DISABLED test_inductor_all_to_all_single (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"oncall: distributed",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2"
] | 16
|
NONE
|
Platforms: inductor, linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_all_to_all_single&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37743029313).
Over the past 3 hours, it has been determined flaky in 16 workflow(s) with 32 failures and 16 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_all_to_all_single`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/test_c10d_functional_native.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/distributed/test_c10d_functional_native.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr @chauhang @penguinwu
| true
|
2,876,545,660
|
Fix bug in async TP handling of "reshape -> scaled mm -> reshape" pattern for float8 row-wise scaling
|
danielvegamyhre
|
closed
|
[
"oncall: distributed",
"release notes: distributed (pipeline)",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Part of https://github.com/pytorch/torchtitan/issues/864
## Summary
While testing torchtitan with float8 training with rowwise scaling + async TP, a [bug](https://github.com/pytorch/torchtitan/issues/864) was discovered. The symptom was the scaling factor dims did not match the dims of the tensor the scales were to be applied to.
My [root cause analysis](https://github.com/pytorch/torchtitan/issues/864#issuecomment-2672465060) determined the reason is that when async TP graph manipulation constructs the `fused_scaled_matmul_reduce_scatter` op, it does not yet handle the "reshape -> scaled mm -> reshape" pattern used in torchao [here](https://github.com/pytorch/ao/blob/ed361ff5c7dd33aba9b4a0da2bd744de5a5debfb/torchao/float8/float8_linear.py#L122-L124) - specifically when row-wise scales are being used.
## TL;DR of root cause
- When a Float8Tensor is reshaped, the scale is reshaped along with it so the dimensions are aligned.
- In the graph manipulation logic of the micropipeline TP post-grad pass, the scaled_mm `A tensor` node references the tensor from _before_ the reshape op, but references the `A_scale` node from _after_ the reshape op.
- To solve this, if a reshape -> scaled mm -> reshape pattern is detected, we can ensure both the tensor and scale used are from _before_ the reshape.
**Note:** the reason we don't use the tensor and scale from _after_ the reshape is that the `scatter_dim`, which corresponds to the original tensor shape, would then be outdated, and keeping the scatter dim in sync with arbitrary reshapes would be complicated/not feasible. Furthermore, using the tensor / scale from before the reshape ensures the `fused_scaled_matmul_reduce_scatter` keeps the intended `(a,b,c) @ (c,d) = (a,b,d)` shape sequence; a minimal shape sketch follows below.
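To make the shape bookkeeping concrete, here is a minimal sketch of the "reshape -> mm -> reshape" pattern with a row-wise scale, using a plain matmul in place of the scaled-mm op (illustrative only; all names are hypothetical):
```python
import torch

a, b, c, d = 2, 3, 4, 5
A = torch.randn(a, b, c)
B = torch.randn(c, d)
A_scale = torch.randn(a, b, 1)          # row-wise scale aligned with A's leading dims

# torchao-style pattern: flatten leading dims, matmul, then restore them.
A_2d = A.reshape(-1, c)                 # (a*b, c)
A_scale_2d = A_scale.reshape(-1, 1)     # the scale is reshaped along with the tensor
out_2d = (A_2d * A_scale_2d) @ B        # (a*b, d)
out = out_2d.reshape(a, b, d)           # (a, b, d)

# The bug: pairing the pre-reshape tensor A of shape (a, b, c) with the post-reshape
# scale of shape (a*b, 1) mismatches dims; the fix takes both from before the reshape.
```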
## Test plan
- Added new unit tests ensuring "reshape -> scaled mm -> reshape" pattern with row-wise scales is supported.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,544,438
|
context_parallel fails with plain sdpa kernel SDPBackend.MATH
|
githubsgi
|
open
|
[
"oncall: distributed",
"triaged",
"module: sdpa",
"module: context parallel"
] | 7
|
CONTRIBUTOR
|
### 🐛 Describe the bug
torch.distributed.context_parallel fails with the plain SDPA kernel (SDPBackend.MATH) with the following stack trace.
```
....../lib/python3.10/site-packages/torch/distributed/tensor/_dispatch.py", line 475, in _try_replicate_spec_for_scalar_tensor
[rank0]: raise RuntimeError(
[rank0]: RuntimeError: aten.add.Tensor: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators!
```
Reproducer code:
```
# torchrun --standalone --nnodes=$N --nproc-per-node=$PPN --rdzv_id=100 --rdzv_endpoint=localhost:29400 torch_distributed_context_parallel.py
import torch
import torch.nn as nn
from dataclasses import dataclass
import os
import torch.distributed as dist
from torch.distributed.pipelining import pipeline, SplitPoint, PipelineStage, ScheduleGPipe
from torch.distributed.device_mesh import init_device_mesh
import contextlib
from typing import Generator, Iterable, List, Optional, Set, Union
import pdb
from torch.nn.attention import sdpa_kernel, SDPBackend

try:
    from torch.distributed.tensor.experimental import context_parallel
    from torch.distributed.tensor.experimental._attention import set_rotate_method
except ImportError:
    print(
        f"PyTorch version {torch.__version__} does not include the experimental "
        "Context Parallel API. Please update to a newer version."
    )

@dataclass
class ModelArgs:
    dim: int = 512
    n_layers: int = 8
    n_heads: int = 8
    vocab_size: int = 10000

class Transformer(nn.Module):
    def __init__(self, model_args: ModelArgs, device_type='cuda'):
        super().__init__()
        self.device_type = device_type
        self.tok_embeddings = nn.Embedding(model_args.vocab_size, model_args.dim)
        # Using a ModuleDict lets us delete layers without affecting names,
        # ensuring checkpoints will correctly save and load.
        self.layers = torch.nn.ModuleDict()
        with sdpa_kernel(SDPBackend.MATH):
            for layer_id in range(model_args.n_layers):
                self.layers[str(layer_id)] = nn.TransformerDecoderLayer(model_args.dim, model_args.n_heads)
        self.norm = nn.LayerNorm(model_args.dim)
        self.output = nn.Linear(model_args.dim, model_args.vocab_size)

    @torch.compiler.disable(recursive=True)
    def forward(self, tokens: torch.Tensor):
        # Handling layers being 'None' at runtime enables easy pipeline splitting
        h = self.tok_embeddings(tokens) if self.tok_embeddings else tokens
        mask = nn.Transformer.generate_square_subsequent_mask(
            tokens.shape[0], device=self.device_type, )  # dtype=tokens.dtype)
        with sdpa_kernel(SDPBackend.MATH):
            for layer in self.layers.values():
                h = layer(h, h, tgt_mask=mask, tgt_is_causal=True, memory_mask=mask, memory_is_causal=True)
        h = self.norm(h) if self.norm else h
        output = self.output(h).clone() if self.output else h
        print(f"Transformer forward output {output.shape} h {h.shape}")
        return output

global rank, device, pp_group, stage_index, num_stages, world_size

def init_distributed(device_type, backend):
    global rank, device, pp_group, stage_index, num_stages, world_size
    rank = int(os.environ["LOCAL_RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    device = torch.device(f"{device_type}:{rank}")  # if torch.cuda.is_available() else torch.device("cpu")
    dist.init_process_group(backend)
    # This group can be a sub-group in the N-D parallel case
    pp_group = dist.new_group()
    stage_index = rank
    num_stages = world_size

def dist_setup(rank=0, world_size=1):
    if os.getenv('MASTER_PORT', default=None):
        pass
    else:
        os.environ['MASTER_PORT'] = '29503'
    if not os.getenv('MASTER_ADDR', default=None):
        os.environ['MASTER_ADDR'] = 'localhost'
    if os.getenv('PALS_LOCAL_RANKID', default=None):
        os.environ['LOCAL_RANK'] = os.getenv('PALS_LOCAL_RANKID')
    if not os.getenv('LOCAL_RANK', default=None):
        os.environ['LOCAL_RANK'] = "0"
    if os.getenv('PMIX_RANK', default=None):
        rank = int(os.getenv('PMIX_RANK'))
        os.environ['RANK'] = str(rank)
    if os.getenv('PALS_RANKID', default=None):
        rank = int(os.getenv('PALS_RANKID'))
        os.environ['RANK'] = str(rank)
    if os.getenv('RANK', default=None):
        rank = os.getenv('RANK')
    if os.getenv('WORLD_SIZE', default=None):
        world_size = int(os.getenv('WORLD_SIZE'))
    else:
        os.environ['WORLD_SIZE'] = str(world_size)
    print(f" RANK {os.environ['RANK']} LOCAL_RANK {os.environ['LOCAL_RANK']} WORLD_SIZE {os.environ['WORLD_SIZE']} ")

def get_train_context(enable_loss_parallel: bool, enable_compiled_autograd: bool):
    @contextlib.contextmanager
    def context(cp_context: Optional[Generator[None, None, None]] = None):
        with contextlib.ExitStack() as stack:
            if enable_loss_parallel:
                stack.enter_context(torch.distributed.tensor.parallel.loss_parallel())
            if enable_compiled_autograd:
                stack.enter_context(
                    torch._dynamo.utils.maybe_enable_compiled_autograd(True)
                )
            if cp_context is not None:
                stack.enter_context(cp_context)
            yield
    return context

def cp_run():
    global rank, device, pp_group, stage_index, num_stages, world_size
    if torch.cuda.is_available():
        backend = 'nccl'
        device_type = 'cuda'
    dist_setup()
    init_distributed(device_type, backend)
    model_args = ModelArgs()
    model = Transformer(model_args, device_type=device_type)

    def tokenwise_loss_fn(outputs, targets):
        loss_fn = nn.CrossEntropyLoss()
        outputs = outputs.reshape(-1, model_args.vocab_size)
        targets = targets.reshape(-1)
        print(f"tokenwise_loss_fn outputs {outputs.shape} targets {targets.shape}")
        return loss_fn(outputs, targets)

    # Dummy data
    batch_size = 64
    embed_dim = 500
    x = torch.ones(batch_size, embed_dim, dtype=torch.long)
    y = torch.randint(0, model_args.vocab_size, (batch_size, embed_dim), dtype=torch.long)
    model.to(device)
    x = x.to(device)
    y = y.to(device)
    world_mesh = init_device_mesh(
        device_type,
        mesh_shape=(world_size, ),
        mesh_dim_names=("cp",),
    )
    cp_mesh = world_mesh["cp"]
    world_mesh["cp"]._flatten(mesh_dim_name="dp_shard_cp")
    context_parallel_ctx = context_parallel(
        mesh=world_mesh["cp"],
        buffers=[x, y, ],
        buffer_seq_dims=[1, 1, ],  # shard on seq dimension
        no_restore_buffers={x, y},  # don't restore
        # cp_rotate_method="allgather",  # shard rotation
    )
    train_context = get_train_context(True, False)
    with train_context(context_parallel_ctx):  # enable Context Parallel
        pred = model(x)
        loss = tokenwise_loss_fn(pred, y)
        del pred
        loss.backward()
    print(f"loss {loss}")

if __name__ == "__main__":
    cp_run()
```
### Versions
nightly 2.7
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,876,525,022
|
Delete unused conda-aws-upload environment
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
As this environment only contains keys for Anaconda uploads
| true
|
2,876,502,656
|
[ROCm] Remove benign warning about missing amdgpu.ids
|
ethanwee1
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Fixes #144203.
We build a custom libdrm when preparing our docker image. We attempt to locate the amdgpu.ids file relative to the python binary, but this is not possible for venv installs of pytorch when the python binary is a symlink. Not finding amdgpu.ids causes `torch.cuda.get_device_name()` to return "AMD Radeon Graphics" as a generic name instead of something specific such as "AMD Instinct MI250X / MI250". The libdrm warning is noisy, so we are removing it.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,876,500,475
|
Remove unused rand call when not falling back to eager for rand
|
henryhu6
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 26
|
CONTRIBUTOR
|
Fixes #147171
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,491,641
|
[ci][anaconda] Remove conda from linter docker images
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Remove conda usage from the linter docker images
Handles part of https://github.com/pytorch/pytorch/issues/148110
| true
|
2,876,461,365
|
Make record/storage alignment in torch.save configurable
|
mikaylagawarecki
|
closed
|
[
"oncall: jit",
"Merged",
"release notes: jit",
"release notes: python_frontend"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148018
* __->__ #147788
* #147787
* #147786
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,876,460,991
|
Add information about checkpoint offset to untyped storages when torch.load under FakeTensorMode
|
mikaylagawarecki
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148018
* #147788
* __->__ #147787
* #147786
| true
|
2,876,460,864
|
Allow torch.load under FakeTensorMode to load FakeTensors with correct devices (for plain Tensors)
|
mikaylagawarecki
|
closed
|
[
"Merged",
"release notes: python_frontend",
"topic: bug fixes"
] | 3
|
CONTRIBUTOR
|
This only fixes _rebuild_tensor_v2 and _rebuild_tensor_v3
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148018
* #147788
* #147787
* __->__ #147786
| true
|
2,876,421,875
|
torch.utils._content_store: fix error in hash_storage on XPU
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"ciflow/xpu"
] | 13
|
COLLABORATOR
|
See https://github.com/pytorch/pytorch/actions/runs/13508573465/job/37745227468 for an example error. This is triggering after the merge of #147541, which enabled Dynamo compilation on XPU.
| true
|
2,876,416,728
|
[Inductor][Optimus] Fix a corner case in split cat aten pass
|
mengluy0125
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"inductor_pattern_match"
] | 5
|
CONTRIBUTOR
|
Summary: We need to further check the input of the cat to make sure all of them are from the same split node.
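A hedged illustration of the corner case described above (not the internal FX pattern code): a `cat` whose inputs come from two different `split` nodes must not be rewritten as if all inputs came from one split. The function name and shapes are illustrative only.
```python
import torch

def corner_case(x, y):
    a, b = torch.split(x, 2, dim=1)
    c, d = torch.split(y, 2, dim=1)
    # The cat mixes outputs of two different split nodes, so the split-cat
    # pattern must not fuse it as a single split's reassembly.
    return torch.cat([a, c], dim=1)

corner_case(torch.randn(3, 4), torch.randn(3, 4))
```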
Test Plan:
# unit test
```
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:split_cat_fx_aten_passes -- test_split_cat_post_grad
```
Buck UI: https://www.internalfb.com/buck2/c875cbdd-5374-46cf-811c-45f91cf6ba3e
Test UI: https://www.internalfb.com/intern/testinfra/testrun/10977524161964655
Network: Up: 64KiB Down: 27KiB (reSessionID-2e5915cb-4894-48f6-ab1c-3981adb42dab)
Executing actions. Remaining 0/3 1.5s exec time total
Command: test. Finished 2 local
Time elapsed: 2:52.1s
Tests finished: Pass 2. Fail 0. Fatal 0. Skip 0. Build failure 0
# E2E
before
aps-recgpt_ig_emb_pt2_comment_out-30c4d5127e
tlparse:
https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/aps-recgpt_ig_emb_pt2_comment_out-30c4d5127e/attempt_0/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100
after
aps-recgpt_ig_emb_pt2_comment_out-c03f74e353
Differential Revision: D70132209
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,414,849
|
[CacheBench] Add hf_T5 llama moco to cachebench
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147783
* #147782
* #147781
* #147780
* #147688
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,876,414,428
|
[CacheBench] Add huggingface
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: benchmark",
"release notes: releng",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147783
* __->__ #147782
* #147781
* #147780
* #147688
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,876,414,299
|
[CacheBench] Separate dynamic into its own option
|
oulgen
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147783
* #147782
* __->__ #147781
* #147780
* #147688
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,876,414,188
|
[CacheBench] Add repeat option so that we can have more accurate cache results
|
oulgen
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147783
* #147782
* #147781
* __->__ #147780
* #147688
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,876,412,999
|
[dynamic shapes][export] ignore when real-tensor fallback fails
|
pianpwk
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Summary: uninspired solution to https://github.com/pytorch/pytorch/issues/147402
Test Plan: test_draft_export
Differential Revision: D70132269
| true
|
2,876,335,474
|
[ROCm] CK Memory-Efficient Attention (attention bias support)
|
alugorey
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks"
] | 18
|
CONTRIBUTOR
|
Implements CK as the backend for memory efficient attention with a couple caveats:
- Still enabled via `torch.backends.cuda.preferred_rocm_fa_library("ck")`
- Does NOT support Nested Tensors
Using the mem_eff path allows us to use attention bias with a CK sdpa backend
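A rough usage sketch on a ROCm build, assuming the selector named above and the standard `sdpa_kernel` context manager; shapes and dtypes are illustrative, not from this PR's tests.
```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# Select the CK backend for ROCm attention (selector named in this PR description).
torch.backends.cuda.preferred_rocm_fa_library("ck")

q, k, v = (torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16) for _ in range(3))
bias = torch.zeros(2, 8, 128, 128, device="cuda", dtype=torch.float16)  # attention bias

with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):  # mem-efficient path
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=bias)
```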
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @albanD
| true
|
2,876,328,213
|
Decorators like `torch.compiler.allow_in_graph` doesn't account for id reuse
|
StrongerXi
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Context: https://github.com/pytorch/pytorch/pull/146367/files#r1964644166
Repro:
```python
import torch
@torch.compiler.allow_in_graph
def f(x):
return x + 1
del f
def g(x):
return x + 2
@torch.compile(fullgraph=True, backend="eager")
def fn(x):
return g(x)
fn(torch.ones(1))
```
Run it with `TORCH_LOGS="graph_code"`:
```
output_graph.py:1385] [0/0] [__graph_code] TRACED GRAPH
output_graph.py:1385] [0/0] [__graph_code] ===== __compiled_fn_1 =====
output_graph.py:1385] [0/0] [__graph_code] /Users/ryanguo99/Documents/work/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
output_graph.py:1385] [0/0] [__graph_code] def forward(self, L_x_: "f32[1][1]cpu"):
output_graph.py:1385] [0/0] [__graph_code] l_x_ = L_x_
output_graph.py:1385] [0/0] [__graph_code]
output_graph.py:1385] [0/0] [__graph_code] # File: /Users/ryanguo99/Documents/work/scratch/allow-in-graph.py:14 in fn, code: return g(x)
output_graph.py:1385] [0/0] [__graph_code] g: "f32[1][1]cpu" = __main___g(l_x_); l_x_ = None
output_graph.py:1385] [0/0] [__graph_code] return (g,)
```
Commenting out `del f` and rerun:
```
output_graph.py:1385] [0/0] [__graph_code] TRACED GRAPH
output_graph.py:1385] [0/0] [__graph_code] ===== __compiled_fn_1 =====
output_graph.py:1385] [0/0] [__graph_code] /Users/ryanguo99/Documents/work/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
output_graph.py:1385] [0/0] [__graph_code] def forward(self, L_x_: "f32[1][1]cpu"):
output_graph.py:1385] [0/0] [__graph_code] l_x_ = L_x_
output_graph.py:1385] [0/0] [__graph_code]
output_graph.py:1385] [0/0] [__graph_code] # File: /Users/ryanguo99/Documents/work/scratch/allow-in-graph.py:9 in g, code: return x + 2
output_graph.py:1385] [0/0] [__graph_code] add: "f32[1][1]cpu" = l_x_ + 2; l_x_ = None
output_graph.py:1385] [0/0] [__graph_code] return (add,)
```
### Error logs
_No response_
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,876,305,269
|
[Dynamo] Small issue in `SETUP_WITH` implementation
|
guilhermeleobas
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 4
|
COLLABORATOR
|
### 🐛 Describe the bug
The CPython [docs](https://docs.python.org/3.10/library/dis.html#opcode-SETUP_WITH) for `SETUP_WITH` state:
> This opcode performs several operations before a with block starts. First, it loads `__exit__()` from the context manager and pushes it onto the stack for later use by `WITH_EXCEPT_START`. Then, `__enter__()` is called, and a `finally` block pointing to `delta` is pushed. Finally, the result of calling the `__enter__()` method is pushed onto the stack.
However, Dynamo pushes `__exit__()` onto the stack after creating the block stack. This ordering has consequences because the stack's length affects exception unwinding. The correct approach is to follow the documentation, but applying it causes Dynamo to crash if a graph break occurs.
https://github.com/pytorch/pytorch/blob/22fae0d948ac14c72b510fafc2283072d744dff9/torch/_dynamo/symbolic_convert.py#L2597-L2606
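For reference, a minimal way to see the bytecode ordering the docs describe is to disassemble a trivial `with` block on CPython ≤ 3.10 (where `SETUP_WITH` still exists); this is just an aid for reading the quote above, not part of the fix.
```python
import dis

def f(cm):
    with cm:
        pass

dis.dis(f)  # on 3.10 and earlier this prints SETUP_WITH ahead of the block body
```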
----------
## Patch and reproducer
```diff
diff --git a/torch/_dynamo/symbolic_convert.py b/torch/_dynamo/symbolic_convert.py
index fdb1ee04c86..d2b3f686409 100644
--- a/torch/_dynamo/symbolic_convert.py
+++ b/torch/_dynamo/symbolic_convert.py
@@ -2594,6 +2594,8 @@ class InstructionTranslatorBase(
else:
target = inst.target
+ self.push(exit)
+
if target:
if isinstance(self, InstructionTranslator):
self.block_stack.append(
@@ -2602,7 +2604,6 @@ class InstructionTranslatorBase(
else:
self.block_stack.append(BlockStackEntry(inst, target, len(self.stack)))
- self.push(exit)
self.push(ctx.enter(self))
def append_prefix_inst(self, inst):
```
```bash
$ pytest test/dynamo/test_ctx_manager.py --tb=short -rs -sv -k test_torch_profiler
...
torch/_dynamo/resume_execution.py:306: in generate
return cls.generate_based_on_original_code_object(
torch/_dynamo/resume_execution.py:509: in generate_based_on_original_code_object
transform_code_object(code, find_new_offset)
torch/_dynamo/bytecode_transformation.py:1418: in transform_code_object
transformations(instructions, code_options)
torch/_dynamo/resume_execution.py:501: in find_new_offset
(new_target,) = (
E torch._dynamo.exc.InternalTorchDynamoError: ValueError: not enough values to unpack (expected 1, got 0)
E
E from user code:
E File "/home/guilhermeleobas/git/pytorch/test/dynamo/test_ctx_manager.py", line 198, in torch_dynamo_resume_in_fn_at_171
E opt_fn = torch.compile(fn, backend=cnts)
E
E Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
E
E
E To execute this test, run the following from the base repo dir:
E python test/dynamo/test_ctx_manager.py CtxManagerTests.test_torch_profiler
E
E This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @anijain2305
### Versions
main branch
| true
|
2,876,303,044
|
cpp_builder: unbreak clang++ detection
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Fixes an issue where `_is_gcc` would match on `clang++` due to the string ending with `g++`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,876,300,828
|
Test
|
hashupdatebot
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
NONE
|
Need to see what the conclusion of the mangled workflow is
| true
|
2,876,293,200
|
[cuda] Add new gamma beta backwards kernel
|
ahmadsharif1
|
open
|
[
"Stale",
"release notes: nn"
] | 2
|
CONTRIBUTOR
|
Context:
Prior to this PR we had 3 non-ROCM CUDA kernels to handle GammaBeta backwards pass:
1. For small M
2. 32x32 faster kernel for shapes that were divisible by 32 for both M and N
3. All other cases
This approach had several weaknesses:
1. For non-32x32 case, the performance was slow because we were not using warp shuffles there
2. For small M we were not doing coalesced loads so performance was poor in that case (though the total runtime is quite small in those cases so perhaps it doesn't matter much)
3. For large M and small N, we were only using few SMs in the GPU because we were only exploiting parallelism in the `N` dimension, not in the `M` dimension
4. We had to maintain 3 different kernels.
This PR:
1. Adds a single templatized kernel that can technically replace all 3 kernels and get equal or faster performance. The only reason I left out the simple kernel is that the `USE_ROCM` case was using it and I couldn't test my kernel with `USE_ROCM`
2. Depending on template parameters, this kernel can either fully reduce the grad values or partially reduce them. In the partial reduction case, a second kernel is needed to fully reduce them.
3. For the large M and small N case, we can launch the partial reduction kernel followed by a `.sum()` to do the full reduction. The advantage is that the partial reduction can fully utilize all SMs on the GPU as we parallelize across the `M` dimension. This can lead to pretty dramatic performance gains -- for instance, I saw 10x+ performance improvement for M=7e6 and N=32 (which was from a real model). A rough sketch of this decomposition is shown below.
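A PyTorch-level sketch of the idea in point 3 (not the CUDA kernel): reduce chunks of the `M` dimension independently, then finish with a cheap `.sum()` over the partials. The helper name and chunk count are illustrative only.
```python
import torch

def dgamma_dbeta_two_pass(dy, x, mean, rstd, num_chunks=128):
    # dy, x: (M, N); mean, rstd: (M,)
    M, N = dy.shape
    xhat = (x - mean[:, None]) * rstd[:, None]
    rows = torch.arange(M).chunk(num_chunks)
    # Partial reduction: each chunk of rows is reduced independently,
    # mimicking many thread blocks working in parallel across M.
    partial_dgamma = torch.stack([(dy[r] * xhat[r]).sum(dim=0) for r in rows])
    partial_dbeta = torch.stack([dy[r].sum(dim=0) for r in rows])
    # Final, cheap reduction over the chunk dimension (the `.sum()` mentioned above).
    return partial_dgamma.sum(dim=0), partial_dbeta.sum(dim=0)
```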
Full performance results are shown below on my H100:

| true
|
2,876,271,479
|
torch._check doesn't work for .item() then select
|
ydwu4
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```python
import torch
# Example tensor
A = torch.tensor([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
# Scalar tensor indicating the index
index = torch.tensor(1, dtype=torch.int64)
@torch.compile(fullgraph=True, dynamic=True)
def f(x, index):
idx = index.item()
torch._check(idx >= 0)
torch._check(idx < x.size(0))
return x[idx]
torch._dynamo.config.capture_scalar_outputs = True
f(A, index)
```
Get the following err message:
```
Traceback (most recent call last):
File "/data/users/yidi/pytorch/test.py", line 22, in <module>
f(A, index)
File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 589, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1515, in _call_user_compiler
raise BackendCompilerFailed(
File "/data/users/yidi/pytorch/torch/_dynamo/output_graph.py", line 1490, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/users/yidi/pytorch/torch/_dynamo/repro/after_dynamo.py", line 150, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/data/users/yidi/pytorch/torch/__init__.py", line 2339, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/data/users/yidi/pytorch/torch/_inductor/compile_fx.py", line 2164, in compile_fx
return aot_autograd(
File "/data/users/yidi/pytorch/torch/_dynamo/backends/common.py", line 101, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1158, in aot_module_simplified
compiled_fn = AOTAutogradCache.load(
File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/autograd_cache.py", line 779, in load
compiled_fn = dispatch_and_compile()
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 1143, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 570, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/data/users/yidi/pytorch/torch/_functorch/aot_autograd.py", line 671, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
flat_f_outs = f(*flat_f_args)
File "/data/users/yidi/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 899, in functional_call
out = PropagateUnbackedSymInts(mod).run(
File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 171, in run
self.env[node] = self.run_node(node)
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 7107, in run_node
result = super().run_node(n)
File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 236, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/data/users/yidi/pytorch/torch/fx/interpreter.py", line 316, in call_function
return target(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/functional_tensor.py", line 528, in __torch_dispatch__
outs_unwrapped = func._op_dk(
File "/data/users/yidi/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1269, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1810, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 1380, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/data/users/yidi/pytorch/torch/_subclasses/fake_tensor.py", line 2404, in _dispatch_impl
r = func(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_ops.py", line 756, in __call__
return self._op(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_meta_registrations.py", line 5271, in meta_select
guard_size_oblivious(-index > size) or guard_size_oblivious(index >= size)
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 408, in guard_size_oblivious
return expr.node.guard_size_oblivious("", 0)
File "/data/users/yidi/pytorch/torch/fx/experimental/sym_node.py", line 575, in guard_size_oblivious
r = self.shape_env.evaluate_expr(
File "/data/users/yidi/pytorch/torch/fx/experimental/recording.py", line 263, in wrapper
return retlog(fn(*args, **kwargs))
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6600, in evaluate_expr
return self._evaluate_expr(
File "/data/users/yidi/pytorch/torch/fx/experimental/symbolic_shapes.py", line 6820, in _evaluate_expr
raise self._make_data_dependent_error(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
GuardOnDataDependentSymNode: Could not guard on data-dependent expression -u0 > 3 (unhinted: -u0 > s1). (Size-like symbols: none)
Caused by: (_meta_registrations.py:5271 in meta_select)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
While executing %select : [num_users=1] = call_function[target=torch.select](args = (%l_x_, 0, %item), kwargs = {})
GraphModule: class GraphModule(torch.nn.Module):
def forward(self, L_index_: "i64[][]", s1: "Sym(s1)", L_x_: "i64[s1, s1][s1, 1]"):
l_index_ = L_index_
l_x_ = L_x_
# File: /data/users/yidi/pytorch/test.py:15 in f, code: idx = index.item()
item: "Sym(s0)" = l_index_.item(); l_index_ = None
# File: /data/users/yidi/pytorch/test.py:19 in f, code: return x[idx]
select: "i64[s1][1]" = torch.select(l_x_, 0, item); l_x_ = item = None
return (select,)
Original traceback:
File "/data/users/yidi/pytorch/test.py", line 19, in f
return x[idx]
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
on master
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,876,259,096
|
Fix bug in FSDP wrapped module with zero argument
|
mori360
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/147531
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,876,216,176
|
add pt2 testing for torch.float8_e8m0fnu
|
vkuzo
|
open
|
[
"Stale",
"release notes: quantization",
"fx"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147770
Summary:
Adds PT2 enablement tests for `torch.float8_e8m0fnu`, skipping tests as needed for the functionality which does not work yet:
* displaying e8m0 in TORCH_LOGS output: fixed in this PR
* uint8 -> view as e8m0 -> view as uint8 in torchinductor: already works, added a test (see the sketch after this list)
* uint8 -> view as e8m0 -> return in torchinductor: filed https://github.com/pytorch/pytorch/issues/147873
* float32|bfloat16 -> cast to e8m0 -> cast to float32|bfloat16: https://github.com/pytorch/pytorch/issues/147875
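A minimal sketch of the second bullet (uint8 -> e8m0 view -> uint8 view), assuming a recent nightly where `torch.float8_e8m0fnu` is available; this is not the actual test added in this PR.
```python
import torch

@torch.compile
def roundtrip(x):
    # bitcast uint8 -> e8m0 -> uint8 (same element size, so .view(dtype) applies)
    return x.view(torch.float8_e8m0fnu).view(torch.uint8)

x = torch.randint(0, 255, (16,), dtype=torch.uint8)
assert torch.equal(roundtrip(x), x)
```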
Test Plan: CI
TODO
Reviewers:
Subscribers:
Tasks:
Tags:
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,876,183,349
|
DISABLED test_custom_hsdp_all_reduce_hook (__main__.TestHSDPWithCustomHook)
|
jithunnair-amd
|
closed
|
[
"oncall: distributed",
"module: rocm",
"skipped"
] | 3
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Fdistributed%2F_composable%2Ffsdp%2Ftest_fully_shard_init.py%3A%3ATestHSDPWithCustomHook%3A%3Atest_custom_hsdp_all_reduce_hook'%22%5D)).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,876,178,160
|
[SDPA] Respect `sdpa_kernel`'s `priority_order` setting in `torch.compile`
|
eqy
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"dynamo-ctx-manager",
"module: sdpa"
] | 6
|
COLLABORATOR
|
[https://github.com/pytorch/pytorch/pull/140467](https://github.com/pytorch/pytorch/pull/140467) added the option to specify a priority order for SDPA, but the `torch.compile` path silently ignored this setting because I wasn't aware of the separate context manager handling in `torch.compile`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,876,173,149
|
DISABLED test_custom_hook_custom_stream (__main__.TestHSDPWithCustomHook)
|
jithunnair-amd
|
closed
|
[
"oncall: distributed",
"module: rocm",
"skipped"
] | 2
|
COLLABORATOR
|
Platforms: rocm
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Fdistributed%2F_composable%2Ffsdp%2Ftest_fully_shard_init.py%3A%3ATestHSDPWithCustomHook%3A%3Atest_custom_hook_custom_stream'%22%5D)).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,876,142,230
|
[Inductor-CPU] Memory allocator lock contention is slowing down templated GEMMs
|
sanchitintel
|
closed
|
[
"module: performance",
"module: cpu",
"oncall: cpu inductor"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
# Problem
CPP GEMM template creates some per-thread local accumulation buffers within an OpenMP parallel region.
All threads contend with each other for memory allocator locks, since even tcmalloc is not completely lock-free.
The perf impact may be significant for some input shapes.
e.g. For an `M=1, K=4096, N=4096` GEMM with 48 threads, with compute dtype & accum dtype being FP16 (special case with lower accuracy but better performance for small M on machines that support AVX512_FP16 ISA), 128 local accum buffers of size 64 bytes each were declared across 48 threads (so 48 threads contended for memory allocator locks twice, and then 32 threads contended with each other for memory allocator locks) on 48 physical cores of an Intel(R) Xeon(R) Platinum 8468H. tcmalloc & Intel OpenMP were preloaded.
Using per-thread stack allocated buffers in this case resulted in a **40% speedup** (ratio of latencies of before/after case).
However, stack allocation isn't necessary to prevent lock-contention (and may not always be feasible due to per-thread stack size limit). Allocating buffers outside OpenMP parallel regions & letting worker threads use chunks of them should also work well.
# Solution
Either `1` or both `1` and `2` below:
1. Allocate heap memory buffers outside the parallel region, and then let worker threads use chunks of them
2. (and maybe) if per-thread buffers are likely to be small enough to not cause stack overflow, try to use stack allocation.
### Versions
The issue also manifests on the current main branch, but the perf difference of the specific example provided above may not be representative of the GEMMs currently supported by the main branch - I haven't checked the precise perf-impact for the main branch, but will do so ASAP, since I encountered this issue while reusing (copy-pasting) the same local buffer-allocation routine invocation as in the main branch's `CPPGemmTemplate`.
cc @jgong5 @leslie-fang-intel @chunyuan-w
| true
|
2,876,128,894
|
[FlexAttention] Improve error msg for embedding < 16
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 4
|
CONTRIBUTOR
|
flex_attention uses tl.dot, which [does not support embedding < 16](https://github.com/triton-lang/triton/issues/2266) on input shapes. This PR adds an explicit error message for users who are prototyping with small tensors.
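A minimal sketch of the kind of prototyping input that hits this limitation (embedding/head dim of 8, below the minimum of 16); the exact error text added by this PR is not reproduced here.
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

q = k = v = torch.randn(1, 1, 32, 8, device="cuda")  # embedding dim 8 < 16
# With this PR, compiling flex_attention on such inputs should surface an explicit
# error instead of failing inside the Triton kernel.
torch.compile(flex_attention)(q, k, v)
```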
Fixes #147701
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @Chillee @drisspg @yanboliang
| true
|
2,876,114,035
|
[c10d] Restrict use condition of NCCL mem pool
|
kwen2501
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147764
Add a check to see if the CUDA driver supports multicast, as is done in Symmetric Memory.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,876,113,378
|
[cuda] Added a correctness test for layernorm backwards
|
ahmadsharif1
|
open
|
[
"module: mkldnn",
"Stale",
"release notes: nn",
"topic: not user facing",
"ciflow/linux-aarch64"
] | 3
|
CONTRIBUTOR
|
My goal is to improve the performance of the layernorm CUDA backwards pass. That will be done in a future PR.
This PR is the first step -- I added a test for making sure the layernorm CUDA backwards pass produces accurate results.
This test passes on the baseline, which means the current implementation of the backward pass of the layernorm on CUDA produces values that are close to the CPU implementation.
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,876,108,118
|
[inductor][user triton] Handle scf.yield more accurately
|
davidberard98
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor",
"module: user triton"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147762
**TL;DR**: Previously, the mutation analysis for scf.if/scf.for would bundle all the scf.yield arguments into a single op (the scf.yield), such that a mutation on any returned value from the scf.if/scf.for would register as a mutation to _all_ of the scf.yield args. To fix this, this PR artificially introduces a new scf.yield op for each of the scf.yield args.
**Context**: The relevant kernel is something like this one (added as a test in test_triton_kernels.py)
```python
@triton.jit
def branch_with_multiple_yield_args(
in_ptr0,
in_ptr1,
out_ptr,
conditional_ptr,
n_elements,
BLOCK_SIZE: "tl.constexpr",
):
pid = tl.program_id(axis=0)
block_start = pid * BLOCK_SIZE
offsets = block_start + tl.arange(0, BLOCK_SIZE)
mask = offsets < n_elements
conditional = tl.load(conditional_ptr)
if conditional:
in0 = in_ptr0 + 1
in1 = in_ptr1 + 1
out = out_ptr + 1
else:
in0 = in_ptr0
in1 = in_ptr1
out = out_ptr
x = tl.load(in0 + offsets, mask=mask)
y = tl.load(in1 + offsets, mask=mask)
tl.store(out + offsets, x + y, mask=mask)
```
The mutation analysis starts with the `tl.store` - and then does a DFS backwards towards the parameters. When a new op is encountered in the DFS, the analysis pass recurses on the op's arguments.
The if branch gets converted to TTIR like this:
```mlir
%21:3 = scf.if %20 -> (!tt.ptr<f32>, !tt.ptr<f32>, !tt.ptr<f32>) {
...
scf.yield %31, %32, %33 : !tt.ptr<f32>, !tt.ptr<f32>, !tt.ptr<f32> loc(#loc10)
} else {
scf.yield %arg0, %arg1, %arg2 : !tt.ptr<f32>, !tt.ptr<f32>, !tt.ptr<f32> loc(#loc11)
} loc(#loc7)
```
and so the "source" op of the `out` variable is marked as the `scf.yield` op - and then all of the arguments to `scf.yield` are marked as mutable (including arg0, arg1, and arg2 - only one of which is actually mutated).
**This PR**: we duplicate the `scf.yield` so that there is one `scf.yield` per return value. That way we avoid marking all the returns from the scf.if/scf.for as mutated when only some are.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @oulgen
Differential Revision: [D70118202](https://our.internmc.facebook.com/intern/diff/D70118202)
| true
|
2,876,093,000
|
[ROCm] Add support for gfx1102 arch to wheel builds.
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 5
|
COLLABORATOR
|
[gfx1102 is not officially supported](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html) but most ROCm libs have gfx1102 code objects available since ROCm 5.5. Now that we're using `--offload-compress` we can fit another gfx target.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,876,053,754
|
[logging] Add toplevel dynamo_compile / tlparse logging for AOTI
|
masnesral
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
This adds the proper context managers in `compile_fx_aot` such that we get:
1) A toplevel chromium event (i.e., tlparse)
2) A single `dynamo_compile` log entry
Test Plan:
Before:
* Scuba (we only log the dynamo event): https://fburl.com/scuba/dynamo_compile/sandbox/gaqowzrd
* Perfetto trace: https://fburl.com/vol7r6w1
After:
* Scuba (we log the dynamo _and_ compile_fx_aot event): https://fburl.com/scuba/dynamo_compile/sandbox/cx2we8w8
* Perfetto trace (click on the toplevel event to see the additional metadata): https://fburl.com/sziy40r9
Differential Revision: D70113859
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,875,986,300
|
Add sparse tensors constructed via legacy constructor to _sparse_tensors_to_validate
|
mikaylagawarecki
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This is a redo of https://github.com/pytorch/pytorch/pull/147408 which added validation at the end of the legacy constructor calls.
The reason I didn't land that was that in `legacy_load`, the constructor would be called before the storages of indices/values are set, so the tensor would not actually be validated.
Technically, torch.sparse.{Foo}Tensor should not even be called by our rebuild process: afaict https://github.com/pytorch/pytorch/pull/27062 was the first PR that added support for sparse tensor serialization, and it already uses `_rebuild_sparse_tensor` (which adds the rebuilt tensor to the list to validate). However, torch.sparse.FooTensor is allowlisted.
This PR adds tensors constructed as such to the list to validate at the end of torch.load.
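A minimal sketch of the allowlisted legacy constructor this PR covers; the point is that a sparse tensor built through this path during `torch.load` is now appended to `_sparse_tensors_to_validate` and checked at the end of loading.
```python
import torch

i = torch.tensor([[0, 1], [1, 0]])
v = torch.tensor([3.0, 4.0])
# Legacy constructor path (as opposed to torch.sparse_coo_tensor / _rebuild_sparse_tensor).
s = torch.sparse.FloatTensor(i, v, torch.Size([2, 2]))
print(s.to_dense())
```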
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147759
| true
|
2,875,953,355
|
[DCP][OSS] Rank local checkpointing in DCP without collectives
|
saumishr
|
open
|
[
"oncall: distributed",
"fb-exported",
"ciflow/trunk",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 13
|
CONTRIBUTOR
|
Summary:
DCP metadata collectives become prohibitively expensive as the job scale grows. This PR introduces rank-local checkpointing, which saves and loads the checkpoint without any collectives. The trade-off for now is that dedupe and re-sharding are not supported; support for these will be introduced soon.
Differential Revision: D70112642
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,875,952,606
|
compilation error on SequenceParallel'ed Dropout
|
bonpyt
|
open
|
[
"oncall: distributed",
"triaged",
"tensor subclass",
"oncall: pt2",
"module: dtensor"
] | 17
|
NONE
|
### 🐛 Describe the bug
Trying to compile a model with `Dropout` parallelised with `SequenceParallel` fails:
```
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, DTensor
from torch import nn
from torch.distributed import get_rank
from torch.distributed._tensor import Replicate, Shard
from torch.distributed.device_mesh import DeviceMesh
from torch.distributed.tensor import DTensor
from torch.distributed.tensor.parallel import (
ColwiseParallel,
PrepareModuleInput,
PrepareModuleOutput,
RowwiseParallel,
SequenceParallel,
parallelize_module,
)
class Model(nn.Module):
def __init__(self, n):
super().__init__()
self.dropout = nn.Dropout()
def forward(self, x):
x = self.dropout(x)
return x
def main():
mesh = init_device_mesh("cuda", (2,))
device = torch.device(f"cuda:{get_rank()}")
torch.set_default_device(device)
dim = 4
model = Model(dim)
if True:
parallelize_module(
model,
mesh,
{
"dropout": SequenceParallel(),
},
)
if True:
model = torch.compile(model)
dt = torch.randn(2, dim, dim)
l = model(dt)
print(l)
if __name__ == "__main__":
main()
```
Fails with this error:
```
[rank1]: File "python3.12/site-packages/torch/distributed/tensor/_random.py", line 186, in _distribute_region
[rank1]: old_offset = self.get_offset("parallel-rng")
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "python3.12/site-packages/torch/distributed/tensor/_random.py", line 204, in get_offset
[rank1]: offset_tensor = (self.rng_states[name])[8:].view(dtype=torch.int64)
[rank1]: ~~~~~~~~~~~~~~~~~~~~~~~^^^^
[rank1]: File "python3.12/site-packages/torch/utils/_stats.py", line 21, in wrapper
[rank1]: return fn(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^
[rank1]: File "python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
[rank1]: return self.dispatch(func, types, args, kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1816, in dispatch
[rank1]: return self._cached_dispatch_impl(func, types, args, kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 1386, in _cached_dispatch_impl
[rank1]: output = self._dispatch_impl(func, types, args, kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 2067, in _dispatch_impl
[rank1]: (flat_args, flat_arg_fake_tensors) = self.validate_and_convert_non_fake_tensors(
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 2465, in validate_and_convert_non_fake_tensors
[rank1]: validated_args = [validate(a) for a in flat_args]
[rank1]: ^^^^^^^^^^^
[rank1]: File "python3.12/site-packages/torch/_subclasses/fake_tensor.py", line 2453, in validate
[rank1]: raise AssertionError(
[rank1]: torch._dynamo.exc.TorchRuntimeError: Failed running call_function <function dropout at 0x7f686830ab60>(*(DTensor(local_tensor=FakeTensor(..., device='cuda:1', size=(32, 4, 16)), device_mesh=DeviceMesh('cuda', [0, 1]), placements=(Shard(dim=1),)), 0.5, True, False), **{}):
[rank1]: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Found in aten.slice.Tensor(tensor([...], size=(16,), dtype=torch.uint8), 0, 8, 9223372036854775807)
```
Disabling either compilation or parallelisation works.
Incidentally, the [SequenceParallel documentation](https://pytorch.org/docs/stable/distributed.tensor.parallel.html#torch.distributed.tensor.parallel.SequenceParallel) mentions that `SequenceParallel` supports `Dropout`:
> SequenceParallel replicates a compatible nn.Module parameters and runs the sharded computation with input sharded on the sequence dimension. This currently supports nn.LayerNorm, nn.Dropout, and the [RMSNorm python implementation](https://github.com/facebookresearch/llama/blob/main/llama/model.py#L34)
However the [docstring](https://github.com/pytorch/pytorch/blob/1eba9b3aa3c43f86f4a2c807ac8e12c4a7767340/torch/distributed/tensor/parallel/style.py#L321) and [comments](https://github.com/pytorch/pytorch/blob/1eba9b3aa3c43f86f4a2c807ac8e12c4a7767340/torch/distributed/tensor/parallel/style.py#L336) only mention `LayerNorm` and `RMSNorm`:
> SequenceParallel style assumes ones initialization if there are weights in the nn.Module (i.e. ``nn.LayerNorm`` or ``RMSNorm``, and they by default have ones initialization).
So the level of support for `Dropout` is not quite clear.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.12.8 (main, Dec 4 2024, 08:54:13) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-116-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H200
GPU 1: NVIDIA H200
GPU 2: NVIDIA H200
GPU 3: NVIDIA H200
GPU 4: NVIDIA H200
GPU 5: NVIDIA H200
GPU 6: NVIDIA H200
GPU 7: NVIDIA H200
Nvidia driver version: 535.216.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 207
Model name: INTEL(R) XEON(R) PLATINUM 8568Y+
Stepping: 2
CPU MHz: 2300.000
BogoMIPS: 4600.00
L1d cache: 4.5 MiB
L1i cache: 3 MiB
L2 cache: 192 MiB
L3 cache: 600 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.6.0
[pip3] numpy==2.0.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.6.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchmetrics==1.6.0
[pip3] triton==3.2.0
[pip3] tritonclient==2.54.0
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @ezyang @albanD @chauhang @penguinwu @tianyu-l @XilunWu
| true
|
2,875,883,996
|
`torch.compile(flex_attention, dynamic=True)` fails with `LoweringException`
|
pzelasko
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
NONE
|
### 🐛 Describe the bug
Minimal snippet for repro:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
flex_attention = torch.compile(flex_attention, dynamic=True)
B, T, H, C = 4, 51, 8, 128
x = torch.randn(B, H, T, C, device="cuda")
flex_attention(x, x, x)
```
Error:
```
Traceback (most recent call last):
File "/home/pzelasko/exp/open_asr_leaderboard/nemo_asr/repro.py", line 16, in <module>
flex_attention(x, x, x)#, score_mod=padding_mask_score_mod)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst) File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return self.output.compile_subgraph(
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 203, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 489, in __call__
return self.compiler_fn(gm, example_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1741, in fw_compiler_base
return inner_compile(
^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 569, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py", line 102, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 685, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile( ^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1129, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 979, in codegen_and_compile
graph.run(*example_inputs)
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 855, in run
return super().run(*args)
^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1496, in run_node
result = super().run_node(n)
^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1143, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/graph.py", line 1133, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/lowering.py", line 409, in wrapped
out = decomp_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/kernel/flex_attention.py", line 1096, in flex_attention
return create_flex_decoding_kernel(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/kernel/flex_decoding.py", line 423, in create_flex_decoding_kernel
kernel_options.setdefault("SPLIT_KV", get_split_k(B, Hkv, seq_len_kv))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/torch/_inductor/kernel/flex_decoding.py", line 301, in get_split_k
split_k = max(split_k, 1)
^^^^^^^^^^^^^^^
File "/home/pzelasko/miniconda3/envs/pytorch26/lib/python3.12/site-packages/sympy/core/relational.py", line 516, in __bool__
raise TypeError("cannot determine truth value of Relational")
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: TypeError: cannot determine truth value of Relational
target: flex_attention
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg2_1', layout=FixedLayout('cuda:0', torch.float32, size=[s0, 8, s2, 128], stride=[1024*s2, 128*s2, 128, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg2_1', layout=FixedLayout('cuda:0', torch.float32, size=[s0, 8, s2, 128], stride=[1024*s2, 128*s2, 128, 1]))
))
args[2]: TensorBox(StorageBox(
InputBuffer(name='arg2_1', layout=FixedLayout('cuda:0', torch.float32, size=[s0, 8, s2, 128], stride=[1024*s2, 128*s2, 128, 1]))
))
args[3]: Subgraph(name='sdpa_score0', graph_module=<lambda>(), graph=None)
args[4]: (1, 1, TensorBox(StorageBox(
ComputedBuffer(name='buf4', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1], stride=[1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x7fb421e87ba0>, ranges=[1, 1, 1]))
)), TensorBox(StorageBox(
ComputedBuffer(name='buf5', layout=FlexibleLayout('cuda:0', torch.int32, size=[1, 1, 1, 1], stride=[1, 1, 1, 1]), data=Pointwise(device=device(type='cuda', index=0), dtype=torch.int32, inner_fn=<function _full.<locals>.inner_fn at 0x7fb421e8ee80>, ranges=[1, 1, 1, 1]))
)), None, None, TensorBox(StorageBox(
Pointwise(
'cuda',
torch.int32,
def inner_fn(index):
_, _, _ = index
tmp0 = ops.load(buf0, 0)
tmp1 = ops.to_dtype(tmp0, torch.int64, src_dtype=torch.int32)
tmp2 = ops.to_dtype(tmp1, torch.int32, src_dtype=torch.int64)
return tmp2
,
ranges=[1, 1, 1],
origin_node=convert_element_type,
origins=OrderedSet([convert_element_type, sum_1])
)
)), TensorBox(StorageBox(
Pointwise(
'cuda',
torch.int32,
def inner_fn(index):
_, _, _, _ = index
tmp0 = ops.index_expr(0, dtype=torch.int16)
tmp1 = ops.to_dtype(tmp0, torch.int64, src_dtype=torch.int16)
tmp2 = ops.to_dtype(tmp1, torch.int32, src_dtype=torch.int64)
return tmp2
,
ranges=[1, 1, 1, 1],
origin_node=convert_element_type_1,
origins=OrderedSet([sort, convert_element_type_1])
)
)), None, None, 1073741824, 1073741824, Subgraph(name='sdpa_mask0', graph_module=<lambda>(), graph=None))
args[5]: 0.08838834764831843
args[6]: {'PRESCALE_QK': False, 'ROWS_GUARANTEED_SAFE': False, 'BLOCKS_ARE_CONTIGUOUS': False, 'OUTPUT_LOGSUMEXP': True}
args[7]: ()
args[8]: ()
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64) [55/1809]
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX 6000 Ada Generation
GPU 1: NVIDIA RTX 6000 Ada Generation
GPU 2: Quadro P1000
Nvidia driver version: 535.113.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10920X CPU @ 3.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 7
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 19.3 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,875,852,048
|
DISABLED test_2d_reductions_mixed_indexing_reduction_op0_cuda (__main__.TritonBlockPointerTestGPU)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_2d_reductions_mixed_indexing_reduction_op0_cuda&suite=TritonBlockPointerTestGPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37715506143).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_2d_reductions_mixed_indexing_reduction_op0_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_strided_blocks.py", line 840, in test_2d_reductions_mixed_indexing
result, (code,) = run_and_compare(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_strided_blocks.py", line 78, in run_and_compare
self.assertTrue(torch.allclose(ref, actual))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/inductor/test_torchinductor_strided_blocks.py TritonBlockPointerTestGPU.test_2d_reductions_mixed_indexing_reduction_op0_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_strided_blocks.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,875,851,951
|
DISABLED test_inductor_all_reduce_single (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d"
] | 14
|
NONE
|
Platforms: inductor, rocm, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_all_reduce_single&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37722206702).
Over the past 3 hours, it has been determined flaky in 18 workflow(s) with 36 failures and 18 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_all_reduce_single`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/distributed/test_c10d_functional_native.py", line 706, in setUp
dist.init_process_group(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 95, in wrapper
func_return = func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1638, in init_process_group
raise ValueError("trying to initialize the default process group twice!")
ValueError: trying to initialize the default process group twice!
```
</details>
Test file path: `distributed/test_c10d_functional_native.py`
cc @clee2000 @wdvr
| true
|
2,875,834,693
|
[RFC][c10d] Expose NCCL API for runtime estimation
|
kwen2501
|
closed
|
[
"oncall: distributed",
"module: nccl",
"module: c10d"
] | 0
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
NCCL API: `ncclGroupSimulateEnd`
https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/group.html#ncclgroupsimulateend
Some PyTorch users would like to access it at the Python level for run-time estimation of communication ops.
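A rough sketch of what a Python-level surface could look like (purely illustrative: `estimate_collective_time` and the context-manager shape are assumptions on my part, not an existing `torch.distributed` API):
```python
from contextlib import contextmanager

import torch.distributed as dist


@contextmanager
def estimate_collective_time(group=None):
    """Hypothetical wrapper: collectives issued inside the block would be
    simulated (ncclGroupStart ... ncclGroupSimulateEnd) instead of launched,
    and the backend's time estimate reported in milliseconds."""
    result = {"estimated_ms": 0.0}
    yield result
    # A real binding would fill result["estimated_ms"] from ncclGroupSimulateEnd here.


# Usage sketch (inside an initialized process group):
# with estimate_collective_time() as est:
#     dist.all_reduce(tensor)
# print(est["estimated_ms"])
```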
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,875,746,109
|
[pytree] Register normal class to register_dataclass
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/pull/147532#discussion_r1964365330
| true
|
2,875,728,653
|
Remove link to search survey
|
svekars
|
closed
|
[
"module: docs",
"Merged",
"ciflow/trunk",
"topic: docs",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
cc @brycebortree @sekyondaMeta @AlannaBurke
| true
|
2,875,675,799
|
Modifications to RuntimeEstimator and SACEstimator
|
sanketpurandare
|
open
|
[
"oncall: distributed",
"open source",
"Stale",
"release notes: distributed (c10d)"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,875,646,503
|
remove prints from partitioner
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
See https://github.com/pytorch/pytorch/pull/146752/files/c57894cd742cb35161dbf888cb3880f243d167e5..22d8f9a6575db5f0400dee761b7eeb558c153676#r1968015955
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #133044
* #147561
* __->__ #147749
| true
|
2,875,612,740
|
Switch to using Docker Images from ECR instead of Docker Hub
|
ZainRizvi
|
closed
|
[
"triaged",
"module: docker"
] | 2
|
CONTRIBUTOR
|
Switch our docker builds to pull from public ECR images instead of Docker Hub
Motivation:
Docker Hub is about to [change its rate limiting policy](https://docs.docker.com/docker-hub/usage/#rate-limit). Moreover, switching to ECR-based images will likely give us more reliable Docker pulls (our Docker Hub connection gets flaky from time to time), and since the pulls would stay within AWS, the downloads would likely be faster as well.
References
* Public docker images on ECR: https://aws.amazon.com/blogs/containers/docker-official-images-now-available-on-amazon-elastic-container-registry-public/
* Alt implementation option: Use ECR's pass through feature: https://docs.aws.amazon.com/AmazonECR/latest/userguide/pull-through-cache-creating-rule.html
| true
|
2,875,602,926
|
Adding MVP of P1 INT16 Full
|
Ivan-Dimitrov
|
open
|
[
"fb-exported",
"Stale",
"release notes: quantization"
] | 5
|
CONTRIBUTOR
|
Summary:
X-link: https://github.com/ctrl-labs/src2/pull/42734
Add a p1_int16 total quantization target, which quantizes the input to int16.
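For readers without access to the internal links, a generic illustration of int16 quantization (purely an assumption about what the target does; none of these helper names come from the diff):
```python
import torch


def quantize_int16(x: torch.Tensor, scale: float) -> torch.Tensor:
    # Scale, round, and clamp into the signed 16-bit range.
    return torch.clamp(torch.round(x / scale), -32768, 32767).to(torch.int16)


def dequantize_int16(q: torch.Tensor, scale: float) -> torch.Tensor:
    return q.to(torch.float32) * scale


x = torch.randn(4)
scale = max(x.abs().max().item() / 32767, 1e-12)  # avoid divide-by-zero for all-zero inputs
print(dequantize_int16(quantize_int16(x, scale), scale))
```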
Test Plan:
https://docs.google.com/document/d/1HMupJU8lO7CDpsV6jmSaXOTRfYN6LThMLBt8gZ3URqk/edit?usp=sharing
f698347399
Differential Revision: D69993444
| true
|
2,875,563,025
|
[Inductor] Fix `inductor/test_kernel_benchmark.py` for new Triton; do not duplicate parameters in `_dump_launch_params`
|
anmyachev
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 8
|
COLLABORATOR
|
The problem is that the new Triton takes the following code branch, which does not filter out call parameters that may already be present in the launcher's `cfg.kwargs`. This is generally expected behavior, so I simply stopped adding arguments from `launcher.config.kwargs`: https://github.com/pytorch/pytorch/blob/cde12207a083f85a3b50dfc059dc1a5f86efec54/torch/_inductor/runtime/triton_heuristics.py#L1099
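To see the failure mode in isolation, here is a minimal self-contained sketch (the name `dynamic_func` just mirrors the one in the Triton binder; everything else is made up) showing how supplying the same keyword from two sources produces exactly this kind of `TypeError`:
```python
def dynamic_func(in_ptr, XBLOCK):
    return in_ptr, XBLOCK


cfg_kwargs = {"XBLOCK": 1}   # already present in launcher.config.kwargs
call_kwargs = {"XBLOCK": 1}  # added again when dumping launch params

try:
    dynamic_func(0, **cfg_kwargs, **call_kwargs)
except TypeError as err:
    # TypeError: dynamic_func() got multiple values for keyword argument 'XBLOCK'
    print(err)
```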
Issue example (from https://github.com/intel/intel-xpu-backend-for-triton/issues/3499):
```bash
Failed when when running cleaned triton Command '['/home/xinanlin/xinanlin/miniforge3/bin/python', '/tmp/torchinductor_xinanlin/4g/c4gp5j3t44nmaxvl7ndgcptyur6sij4k3b
dmtky5n4j4jrd5k5pu.py.cleaned']' returned non-zero exit status 1.
Traceback (most recent call last):
File "/tmp/torchinductor_xinanlin/4g/c4gp5j3t44nmaxvl7ndgcptyur6sij4k3bdmtky5n4j4jrd5k5pu.py.cleaned", line 103, in <module>
compiled_module_main('None', benchmark_compiled_module)
File "/home/xinanlin/xinanlin/pytorch/torch/_inductor/wrapper_benchmark.py", line 435, in compiled_module_main
wall_time_ms = benchmark_compiled_module_fn(times=times, repeat=repeat) * 1000
File "/tmp/torchinductor_xinanlin/4g/c4gp5j3t44nmaxvl7ndgcptyur6sij4k3bdmtky5n4j4jrd5k5pu.py.cleaned", line 98, in benchmark_compiled_module
return print_performance(fn, times=times, repeat=repeat)
File "/home/xinanlin/xinanlin/pytorch/torch/_inductor/utils.py", line 451, in print_performance
[timed(model, example_inputs, times, device) for _ in range(repeat)]
File "/home/xinanlin/xinanlin/pytorch/torch/_inductor/utils.py", line 451, in <listcomp>
[timed(model, example_inputs, times, device) for _ in range(repeat)]
File "/home/xinanlin/xinanlin/pytorch/torch/_inductor/utils.py", line 434, in timed
result = model(*example_inputs)
File "/tmp/torchinductor_xinanlin/4g/c4gp5j3t44nmaxvl7ndgcptyur6sij4k3bdmtky5n4j4jrd5k5pu.py.cleaned", line 97, in <lambda>
fn = lambda: call([arg0_1, arg1_1])
File "/tmp/torchinductor_xinanlin/4g/c4gp5j3t44nmaxvl7ndgcptyur6sij4k3bdmtky5n4j4jrd5k5pu.py.cleaned", line 86, in call
triton_poi_fused_add_0[grid(1)](arg0_1, arg1_1, buf0, 1, 1, XBLOCK=1, num_warps=1, num_stages=1)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/site-packages/triton/runtime/jit.py", line 336, in <lambda>
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
File "/home/xinanlin/xinanlin/miniforge3/lib/python3.10/site-packages/triton/runtime/jit.py", line 531, in run
bound_args, specialization, options = binder(*args, **kwargs)
TypeError: dynamic_func() got multiple values for argument 'XBLOCK'
```
Reproduce:
`python test/inductor/test_kernel_benchmark.py -k test_remove_inductor_deps`
Triton: https://github.com/intel/intel-xpu-backend-for-triton/commit/c4a79a1960ba1c247c2548cbd3abf6a728b3ce6f
Pytorch: bea72180ed75f522ce4fe5e723bc2112e0874732
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
@davidberard98 @etaf please take a look
| true
|
2,875,529,758
|
[AOTI] Extend torchgen to generate C shim with version number
|
desertfire
|
open
|
[
"topic: improvements",
"topic: not user facing",
"ciflow/inductor",
"suppress-api-compatibility-check",
"suppress-bc-linter",
"module: aotinductor"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147745
Summary: While it is OK to add a new arg with a default value to a fallback op in Python, doing so is BC-breaking for the C shim. This PR adds an automated approach to updating the C shim files when specifying a version number with a list of new args for the modified op. TO-BE-FILLED: there will be an example PR linked here later.
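For context, a small sketch of why the Python side tolerates this while a compiled shim generally does not (the shim exports a fixed symbol signature, so a new argument changes the ABI):
```python
# In Python, a trailing argument with a default keeps every existing call site valid.
def fallback_op(x, new_flag=False):
    return -x if new_flag else x


fallback_op(1)        # old call site, unchanged
fallback_op(1, True)  # new call site opting into the new argument
```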
cc @chenyang78 @penguinwu @yushangdi
| true
|
2,875,374,572
|
Importing torch_tensorrt causes warning for implicitly cleaned up file
|
ivan94fi
|
closed
|
[
"oncall: distributed"
] | 0
|
NONE
|
### 🐛 Describe the bug
A temporary directory is created at this line in `torch.distributed.nn.jit.instantiator` and it is never cleaned:
https://github.com/pytorch/pytorch/blob/576ed1e400d069ec2fff6162f82a71ff0bd81f7c/torch/distributed/nn/jit/instantiator.py#L20
A warning is generated by `tempfile` itself when the program exits:
```python
WARNING py.warnings /usr/lib/python3.12/tempfile.py:1075: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpxy_e0smt'> warnings.py:110
_warnings.warn(warn_message, ResourceWarning)
```
The generated file is `_remote_module_non_scriptable.py`.
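A self-contained sketch, independent of PyTorch, that reproduces the same implicit-cleanup warning pattern:
```python
import gc
import tempfile
import warnings

warnings.simplefilter("always", ResourceWarning)

# Module-level TemporaryDirectory that is never explicitly cleaned up,
# mirroring the pattern in instantiator.py.
_TEMP_DIR = tempfile.TemporaryDirectory()
print(_TEMP_DIR.name)

# When the object is garbage-collected (forced here; normally at interpreter
# exit), tempfile cleans it up implicitly and emits the ResourceWarning.
del _TEMP_DIR
gc.collect()
```
Using a `with` block or explicitly calling `cleanup()` (e.g. via `atexit`) would avoid the warning.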
For me the warning message is generated when `torch_tensorrt` is imported:
```text
-> import torch_tensorrt
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/__init__.py(125)<module>()
-> from torch_tensorrt.runtime import * # noqa: F403
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/runtime/__init__.py(1)<module>()
-> from torch_tensorrt.dynamo.runtime import ( # noqa: F401
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/dynamo/__init__.py(10)<module>()
-> from ._compiler import compile, convert_exported_program_to_serialized_trt_engine
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/dynamo/_compiler.py(14)<module>()
-> from torch_tensorrt.dynamo import _defaults, partitioning
<frozen importlib._bootstrap>(1415)_handle_fromlist()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/dynamo/partitioning/__init__.py(1)<module>()
-> from ._adjacency_partitioner import partition as fast_partition
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/dynamo/partitioning/_adjacency_partitioner.py(20)<module>()
-> from torch_tensorrt.dynamo.conversion._ConverterRegistry import (
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/dynamo/conversion/__init__.py(1)<module>()
-> from . import aten_ops_converters, ops_evaluators, prims_ops_converters
<frozen importlib._bootstrap>(1415)_handle_fromlist()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/dynamo/conversion/aten_ops_converters.py(12)<module>()
-> from torch_tensorrt.dynamo.conversion import impl
<frozen importlib._bootstrap>(1415)_handle_fromlist()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/dynamo/conversion/impl/__init__.py(1)<module>()
-> from torch_tensorrt.fx.converters.impl import convolution
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/fx/__init__.py(1)<module>()
-> from .converters import * # noqa: F403 F401
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/fx/converters/__init__.py(5)<module>()
-> from .adaptive_avgpool import * # noqa: F401 F403
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/fx/converters/adaptive_avgpool.py(7)<module>()
-> from .converter_utils import extend_mod_attr_to_tuple, mark_as_int8_layer
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/fx/converters/converter_utils.py(23)<module>()
-> from ..utils import Frameworks, unified_dtype_converter
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/fx/utils.py(12)<module>()
-> from torch_tensorrt.fx.passes.lower_basic_pass import (
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/fx/passes/lower_basic_pass.py(14)<module>()
-> from ..tracer.acc_tracer import acc_ops
<frozen importlib._bootstrap>(1415)_handle_fromlist()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch_tensorrt/fx/tracer/acc_tracer/acc_ops.py(891)<module>()
-> from torchvision.ops import stochastic_depth
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torchvision/__init__.py(10)<module>()
-> from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
<frozen importlib._bootstrap>(1415)_handle_fromlist()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torchvision/models/__init__.py(2)<module>()
-> from .convnext import *
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torchvision/models/convnext.py(8)<module>()
-> from ..ops.misc import Conv2dNormActivation, Permute
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torchvision/ops/__init__.py(23)<module>()
-> from .poolers import MultiScaleRoIAlign
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torchvision/ops/poolers.py(10)<module>()
-> from .roi_align import roi_align
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torchvision/ops/roi_align.py(7)<module>()
-> from torch._dynamo.utils import is_compile_supported
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/_dynamo/__init__.py(3)<module>()
-> from . import convert_frame, eval_frame, resume_execution
<frozen importlib._bootstrap>(1415)_handle_fromlist()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/_dynamo/convert_frame.py(53)<module>()
-> from . import config, exc, trace_rules
<frozen importlib._bootstrap>(1415)_handle_fromlist()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/_dynamo/trace_rules.py(46)<module>()
-> from .variables import (
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/_dynamo/variables/__init__.py(2)<module>()
-> from .builtin import BuiltinVariable
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py(47)<module>()
-> from .ctx_manager import EventVariable, StreamVariable
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/_dynamo/variables/ctx_manager.py(22)<module>()
-> from .functions import (
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/_dynamo/variables/functions.py(31)<module>()
-> from torch.distributed._composable.fsdp import _fsdp_param_group
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/_composable/__init__.py(3)<module>()
-> from .fully_shard import fully_shard
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/_composable/fully_shard.py(10)<module>()
-> from torch.distributed.fsdp._common_utils import _FSDPState
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/fsdp/__init__.py(1)<module>()
-> from ._flat_param import FlatParameter as FlatParameter
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/fsdp/_flat_param.py(47)<module>()
-> from ._fsdp_extensions import (
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/fsdp/_fsdp_extensions.py(6)<module>()
-> from torch.distributed._shard.sharded_tensor.api import ShardedTensor
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/_shard/__init__.py(1)<module>()
-> from .api import _shard_tensor, load_with_process_group, shard_module, shard_parameter
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/_shard/api.py(9)<module>()
-> from torch.distributed._shard.sharded_tensor import ShardedTensor
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py(8)<module>()
-> from .api import (
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/_shard/sharded_tensor/api.py(31)<module>()
-> from .reshard import reshard_local_shard, reshuffle_local_shard
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/_shard/sharded_tensor/reshard.py(14)<module>()
-> from torch.distributed.nn.functional import all_to_all, all_to_all_single
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1310)_find_and_load_unlocked()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/nn/__init__.py(7)<module>()
-> from .api.remote_module import RemoteModule
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
/opt/uv/venv/lib/python3.12/site-packages/torch/distributed/nn/api/remote_module.py(26)<module>()
-> from torch.distributed.nn.jit import instantiator
<frozen importlib._bootstrap>(1415)_handle_fromlist()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
<frozen importlib._bootstrap>(1360)_find_and_load()
<frozen importlib._bootstrap>(1331)_find_and_load_unlocked()
<frozen importlib._bootstrap>(935)_load_unlocked()
<frozen importlib._bootstrap_external>(995)exec_module()
<frozen importlib._bootstrap>(488)_call_with_frames_removed()
> /opt/uv/venv/lib/python3.12/site-packages/torch/distributed/nn/jit/instantiator.py(17)<module>()
```
### Versions
```
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7900X 12-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 65%
CPU max MHz: 5733.0000
CPU min MHz: 400.0000
BogoMIPS: 9399.26
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxconverter-common==1.14.0
[pip3] onnxmltools==1.12.0
[pip3] onnxruntime==1.18.1
[pip3] onnxruntime-gpu==1.18.1
[pip3] onnxscript==0.1.0.dev20241212
[pip3] torch==2.5.0+cu124
[pip3] torch_tensorrt==2.5.0+cu124
[pip3] torchinfo==1.8.0
[pip3] torchprofile==0.0.4
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0+cu124
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,875,151,503
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu"
] | 7
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit pin to [306a0ffb6e0cae27c5bd9a3b9cd378048c8e00e7](https://github.com/intel/torch-xpu-ops/commit/306a0ffb6e0cae27c5bd9a3b9cd378048c8e00e7), which includes:
- Bugfix (LayerNorm/Nonzeros)
- Update AOT target
| true
|
2,874,986,066
|
Update CPU tolerance for f16 triplet margin loss
|
GeorgeWigley
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 7
|
CONTRIBUTOR
|
Currently, the `test_torchinductor_opinfo` test for `nn.functional.triplet_margin_loss` fails on AArch64; this PR increases the acceptable ATOL and RTOL for that test when using F16. There is precedent for this, as XPU and CUDA already increase the tolerance. Additionally, the CPU backend increases the tolerance for the `with_distance_loss` variant of `triplet_margin_loss`.
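For illustration only (this is not the exact mechanism the opinfo test uses, and the atol/rtol values below are assumptions), the kind of comparison involved looks roughly like this:
```python
import torch
import torch.nn.functional as F

a = torch.randn(8, 16, dtype=torch.float16)
p = torch.randn(8, 16, dtype=torch.float16)
n = torch.randn(8, 16, dtype=torch.float16)

# Reference computed in float32 and cast back, versus a result computed directly in float16.
ref = F.triplet_margin_loss(a.float(), p.float(), n.float()).half()
res = F.triplet_margin_loss(a, p, n)

# float16 accumulates more rounding error than float32, so the comparison
# needs looser tolerances than the defaults.
torch.testing.assert_close(res, ref, atol=1e-2, rtol=1e-2)
```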
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,874,954,113
|
[dynamo] Support passing arguments to `DeviceMesh.get_group`
|
danthe3rd
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,874,883,183
|
Deterministic behaviour with torch.randn_like() on mps when tensor dimensionality exceeds some size
|
henry-ald
|
open
|
[
"needs reproduction",
"triaged",
"module: correctness (silent)",
"module: mps"
] | 6
|
NONE
|
### 🐛 Describe the bug
With `device="mps"`, `torch.randn_like()` produces tensors with elements of identical value along a given dimension once the dimensionality exceeds a certain size. This behaviour is not present on the CPU. Here is some code demonstrating the issue:
```python
import torch
# On MPS GPU
X = torch.randn(4, device="mps", dtype=torch.float32)
print(torch.randn_like(X))
X = torch.randn(4,1, device="mps", dtype=torch.float32)
print(torch.randn_like(X))
X = torch.randn(4,1,1, device="mps", dtype=torch.float32)
print(torch.randn_like(X))
X = torch.randn(4,1,1,1, device="mps", dtype=torch.float32)
print(torch.randn_like(X))
X = torch.randn(4,1,1,1,1, device="mps", dtype=torch.float32)
print(torch.randn_like(X)) # Elements of identical value
X = torch.randn(4,1,1,1,1,1, device="mps", dtype=torch.float32)
print(torch.randn_like(X)) # Elements of identical value
# On CPU
X = torch.randn(4)
print(torch.randn_like(X))
X = torch.randn(4,1)
print(torch.randn_like(X))
X = torch.randn(4,1,1)
print(torch.randn_like(X))
X = torch.randn(4,1,1,1)
print(torch.randn_like(X))
X = torch.randn(4,1,1,1,1)
print(torch.randn_like(X)) # Elements NOT of identical value
X = torch.randn(4,1,1,1,1,1)
print(torch.randn_like(X)) # Elements NOT of identical value
```
For example, with `X = torch.randn(4,1,1,1,1, device="mps", dtype=torch.float32)`, `torch.randn_like(X)` produces the tensor
```
tensor([[[[[-0.2945]]]],
[[[[-0.2945]]]],
[[[[-0.2945]]]],
[[[[-0.2945]]]]], device='mps:0')
```
while the CPU counterpart with `X = torch.randn(4,1,1,1,1)` produces
```
tensor([[[[[ 0.6925]]]],
[[[[ 2.0277]]]],
[[[[-0.0787]]]],
[[[[ 0.0240]]]]])
```
This deterministic behaviour does not occur for the examples of tensor size less than or equal to (4,1,1,1).
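One possible workaround sketch (an assumption on my part, not a confirmed fix): draw the samples with the CPU generator and move the result to the MPS device:
```python
import torch

X = torch.randn(4, 1, 1, 1, 1, device="mps", dtype=torch.float32)

# Generate on CPU, then transfer; this sidesteps the MPS generator path.
noise = torch.randn(X.shape, dtype=X.dtype, device="cpu").to(X.device)
print(noise)  # values differ along the leading dimension
```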
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.13.2 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 12:55:35) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.3-arm64-arm-64bit-Mach-O
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Pro
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] torch==2.5.1
[conda] libtorch 2.5.1 gpu_mps_h82d5d13_202
[conda] nomkl 3.0 0
[conda] numpy 2.2.2 py313h7c57ca2_0
[conda] numpy-base 2.2.2 py313hb98e858_0
[conda] pytorch 2.5.1 gpu_mps_py313h80af30b_202
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,874,777,365
|
torchvision export model error:: torchvision.models.detection.retinanet_resnet50_fpn_v2
|
wangqianscu
|
open
|
[
"oncall: jit"
] | 0
|
NONE
|
### 🐛 Describe the bug
When I use torchvision.models.detection.retinanet_resnet50_fpn() to generate the model, it raises an error.
My code:
```
import torch
import torchvision

input_ = torch.ones(3, 300, 400)
input_1 = torch.ones(3, 500, 400)
input_data = [input_, input_1]   # the image list passed to trace (see traceback below)
model_file = "retinanet.pt"

model = torchvision.models.detection.retinanet_resnet50_fpn()
print(model([input_, input_1]))  # all outputs are empty lists '[]'
model.cpu()
model.eval()
traced_model = torch.jit.trace(model, input_data)
torch.jit.save(traced_model, model_file)
```
Error log:
```
[{'boxes': tensor([], size=(0, 4), grad_fn=<StackBackward0>), 'scores': tensor([], grad_fn=<IndexBackward0>), 'labels': tensor([], dtype=torch.int64)}, {'boxes': tensor([], size=(0, 4), grad_fn=<StackBackward0>), 'scores': tensor([], grad_fn=<IndexBackward0>), 'labels': tensor([], dtype=torch.int64)}]
-------------------
Traceback (most recent call last):
File "/home/wangqian/torch_model_file_gen/run.py", line 43, in <module>
test_retinanet_resnet50_fpn_v2(weights=sys.argv[1])
File "/home/wangqian/torch_model_file_gen/run.py", line 41, in test_retinanet_resnet50_fpn_v2
convert_model(model, weights, [input_, input_1], model_file)
File "/home/wangqian/torch_model_file_gen/run.py", line 22, in convert_model
traced_model = torch.jit.trace(model, input_data)
File "/usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py", line 806, in trace
return trace_module(
File "/usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py", line 1074, in trace_module
module._c._create_method_from_trace(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _slow_forward
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torchvision/models/detection/retinanet.py", line 606, in forward
images, targets = self.transform(images, targets)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _slow_forward
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torchvision/models/detection/transform.py", line 131, in forward
for k, v in t.items():
AttributeError: 'Tensor' object has no attribute 'items'. Did you mean: 'item'?
```
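A likely cause (my assumption, not confirmed): `torch.jit.trace` interprets a list of example inputs as a tuple of positional arguments, so the second image tensor is passed as `targets`, and the transform then fails calling `.items()` on it. Since detection models contain data-dependent control flow, scripting is usually the better fit; a hedged sketch:
```python
import torch
import torchvision

input_ = torch.ones(3, 300, 400)
input_1 = torch.ones(3, 500, 400)

model = torchvision.models.detection.retinanet_resnet50_fpn()
model.eval()

# Script instead of trace: torchvision's detection models are written to be scriptable.
scripted_model = torch.jit.script(model)
torch.jit.save(scripted_model, "retinanet_scripted.pt")

# In TorchScript mode the detection models return a (losses, detections) tuple.
losses, detections = scripted_model([input_, input_1])
print(detections)
```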
### Versions
torch 2.2.1+cpu
torchaudio 2.2.1+cpu
torchvision 0.17.1+cpu
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,874,752,533
|
Hack Whatsapp, recover a Whatsapp account 49d0e
|
cindracomly99
|
closed
|
[] | 0
|
NONE
|
**Try this** [Tap here to continue](https://docs.google.com/document/d/1PBHPbsbaO_-qloDueUYs5cyWstUR7Xkc9HHOnmwmDAE/edit?usp=sharing)
| true
|
2,874,650,932
|
[Triton upstream] [Inductor] [ROCm] UT failures "Cannot bitcast data-type of size"
|
jataylo
|
closed
|
[
"module: rocm",
"triaged",
"oncall: pt2",
"module: inductor",
"upstream triton"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
As seen in https://github.com/pytorch/pytorch/pull/147320 when attempting to bump triton in preparation for 3.3.
```
======================================================================
ERROR: test_comprehensive_sort_cuda_bool (__main__.TestInductorOpInfoCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
return fn(self, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 2292, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1537, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 886, in inner
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 878, in inner
fn(self, device, dtype, op)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1127, in test_comprehensive
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1087, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 618, in check_model_gpu
check_model(
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 459, in check_model
actual = run(*example_inputs, **kwargs)
File "/tmp/pytorch/torch/_dynamo/eval_frame.py", line 589, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/tmp/pytorch/torch/_inductor/compile_fx.py", line 746, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/tmp/pytorch/torch/_inductor/compile_fx.py", line 731, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/tmp/pytorch/torch/_inductor/compile_fx.py", line 1403, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/tmp/pytorch/torch/_inductor/compile_fx.py", line 1123, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/tmp/pytorch/torch/_inductor/graph.py", line 2011, in compile_to_module
return self._compile_to_module()
File "/tmp/pytorch/torch/_inductor/graph.py", line 2053, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/tmp/pytorch/torch/_inductor/codecache.py", line 2700, in load_by_key_path
mod = _reload_python_module(key, path)
File "/tmp/pytorch/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpnnucq7co/vv/cvvxkmybo4rkuxnekscvfnxhtjysfb5rsduw25rqg6a5ana4jjlh.py", line 103, in <module>
async_compile.wait(globals())
File "/tmp/pytorch/torch/_inductor/async_compile.py", line 421, in wait
scope[key] = result.result()
File "/tmp/pytorch/torch/_inductor/codecache.py", line 3177, in result
return self.result_fn()
File "/tmp/pytorch/torch/_inductor/async_compile.py", line 311, in get_result
kernel = task.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/opt/conda/envs/py_3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
torch._inductor.exc.InductorError: SubprocException: An exception occurred in a subprocess:
Traceback (most recent call last):
File "/root/trit-new/python/triton/language/core.py", line 34, in wrapper
return fn(*args, **kwargs)
File "/root/trit-new/python/triton/language/core.py", line 1043, in to
return cast(self, dtype, fp_downcast_rounding, bitcast, _builder=_builder)
File "/root/trit-new/python/triton/language/core.py", line 34, in wrapper
return fn(*args, **kwargs)
File "/root/trit-new/python/triton/language/core.py", line 1771, in cast
return semantic.bitcast(input, dtype, _builder)
File "/root/trit-new/python/triton/language/semantic.py", line 836, in bitcast
raise ValueError("Cannot bitcast data-type of size " + str(src_bits) + " to "
ValueError: Cannot bitcast data-type of size 32 to data-type of size 1
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 25:11:
idtype = tl.core.get_int_dtype(bitwidth=x.dtype.primitive_bitwidth, signed=True)
y = tl.reshape(x, shape)
iy = y.to(idtype, bitcast=True)
# slice left/right with 'stride' 2**(n_dims - i - 1)
right_mask = tl.arange(0, 2)[None, :, None].to(idtype)
left_mask = (1 - right_mask).to(idtype)
ileft = tl.broadcast_to(tl.sum(iy * left_mask, 1)[:, None, :], shape)
iright = tl.broadcast_to(tl.sum(iy * right_mask, 1)[:, None, :], shape)
ileft = tl.reshape(ileft, x.shape)
iright = tl.reshape(iright, x.shape)
left = ileft.to(x.dtype, bitcast=True)
^
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 27:18:
# if flip = 00110011... then all the elements will be re-arranged alternatingly (with
# a stride of 2) at this stage
if alternating:
shape: tl.constexpr = [n_outer * 2 ** (n_dims - 1 - stage), 2, 2**stage]
flip = tl.reshape(
tl.broadcast_to(tl.arange(0, 2)[None, :, None], shape), x.shape
)
else:
flip = False
# perform `stage` rounds of `compare-and-swap`
for i in tl.static_range(stage):
x, idxs = _compare_and_swap_with_index(
^
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 19:18:
):
x, idxs = tl.broadcast(x, idxs)
# handle default dimension or check that it is the most minor dim
_dim: tl.constexpr = len(x.shape) - 1 if dim is None else dim
tl.static_assert(
_dim == len(x.shape) - 1, "only minor dimension is currently supported"
)
# iteratively run bitonic merge-sort steps
n_dims: tl.constexpr = _log2(x.shape[_dim])
for i in tl.static_range(1, n_dims + 1):
x, idxs = _bitonic_merge_with_index(
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/pytorch/torch/_inductor/compile_worker/subproc_pool.py", line 337, in do_job
result = job()
File "/tmp/pytorch/torch/_inductor/runtime/compile_tasks.py", line 75, in _worker_compile_triton
kernel.precompile(warm_cache_only=True)
File "/tmp/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 266, in precompile
self._precompile_worker()
File "/tmp/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 295, in _precompile_worker
compile_results.append(self._precompile_config(c))
File "/tmp/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 530, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
File "/root/trit-new/python/triton/compiler/compiler.py", line 277, in compile
module = src.make_ir(options, codegen_fns, module_map, context)
File "/root/trit-new/python/triton/compiler/compiler.py", line 81, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,
triton.compiler.errors.CompilationError: at 27:18:
x0 = xindex
tmp0 = tl.load(in_ptr0 + (r0_1 + 5*x0), xmask & r0_mask, other=0.0)
tl.static_assert(tmp0.dtype == tl.int1)
tmp1 = r0_1
tmp2 = tmp1.to(tl.int16)
tl.static_assert(tmp2.dtype == tl.int16)
tl.static_assert(tmp2.dtype == tl.int16)
tmp3 = tl.broadcast_to(tmp0, [XBLOCK, R0_BLOCK])
tl.static_assert(tmp3.dtype == tl.int1)
tmp4 = tl.broadcast_to(tmp2, [XBLOCK, R0_BLOCK])
tl.static_assert(tmp4.dtype == tl.int16)
tmp5, tmp6, = triton_helpers.sort_with_index(tmp3, tmp4, rnumel, 1, stable=True, descending=False)
^
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 7: SampleInput(input=Tensor[size=(5, 5, 5), device="cuda:0", dtype=torch.bool], args=(), kwargs={'stable': 'True'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=7 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_sort_cuda_bool
```
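For reference, a rough standalone repro distilled from the OpInfo sample above (index 7: a bool tensor of shape (5, 5, 5) with `stable=True`); this is a sketch, not the exact OpInfo harness input:

```python
import torch

def fn(x):
    # torch.sort on a bool tensor; Inductor lowers this through the Triton
    # sort_with_index helper, which is where the int1 bitcast fails above.
    return torch.sort(x, stable=True)

x = torch.randint(0, 2, (5, 5, 5), device="cuda").bool()
values, indices = torch.compile(fn)(x)  # raises the CompilationError above
```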
### Versions
Triton/Torch TOT
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10
| true
|
2,874,640,602
|
[Triton upstream] [Inductor] [ROCm] OpInfo quantile UT accuracy issues
|
jataylo
|
closed
|
[
"module: rocm",
"triaged",
"oncall: pt2",
"module: inductor",
"upstream triton"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
As seen in https://github.com/pytorch/pytorch/pull/147320 while attempting to bump Triton in preparation for the 3.3 release.
```
======================================================================
ERROR: test_comprehensive_nanquantile_cuda_float32 (__main__.TestInductorOpInfoCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
return fn(self, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 2292, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1537, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 886, in inner
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 878, in inner
fn(self, device, dtype, op)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1127, in test_comprehensive
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1087, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 618, in check_model_gpu
check_model(
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 576, in check_model
self.assertEqual(
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 12 (16.7%)
Greatest absolute difference: 0.46369579434394836 at index (1, 1, 0, 1) (up to 1.5e-05 allowed)
Greatest relative difference: 1.0 at index (1, 1, 0, 0) (up to 1.3e-05 allowed)
The failure occurred for item [0]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 58: SampleInput(input=Tensor[size=(3, 2, 1, 2), device="cuda:0", dtype=torch.float32], args=TensorList[Tensor[size=(2,), device="cuda:0", dtype=torch.float32]], kwargs={'dim': '2', 'keepdim': 'True', 'interpolation': "'linear'"}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=58 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_nanquantile_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
======================================================================
ERROR: test_comprehensive_nanquantile_cuda_float64 (__main__.TestInductorOpInfoCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
return fn(self, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 2292, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1537, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 886, in inner
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 878, in inner
fn(self, device, dtype, op)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1127, in test_comprehensive
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1087, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 618, in check_model_gpu
check_model(
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 576, in check_model
self.assertEqual(
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 12 (16.7%)
Greatest absolute difference: 0.2740929497135982 at index (1, 1, 0, 0) (up to 1e-07 allowed)
Greatest relative difference: 1.0 at index (1, 1, 0, 0) (up to 1e-07 allowed)
The failure occurred for item [0]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 58: SampleInput(input=Tensor[size=(3, 2, 1, 2), device="cuda:0", dtype=torch.float64], args=TensorList[Tensor[size=(2,), device="cuda:0", dtype=torch.float64]], kwargs={'dim': '2', 'keepdim': 'True', 'interpolation': "'linear'"}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=58 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_nanquantile_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
======================================================================
ERROR: test_comprehensive_quantile_cuda_float32 (__main__.TestInductorOpInfoCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
return fn(self, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 2292, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1537, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 886, in inner
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 878, in inner
fn(self, device, dtype, op)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1127, in test_comprehensive
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1087, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 618, in check_model_gpu
check_model(
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 576, in check_model
self.assertEqual(
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 12 (16.7%)
Greatest absolute difference: 0.46369579434394836 at index (1, 1, 0, 1) (up to 1.5e-05 allowed)
Greatest relative difference: 1.0 at index (1, 1, 0, 0) (up to 1.3e-05 allowed)
The failure occurred for item [0]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 58: SampleInput(input=Tensor[size=(3, 2, 1, 2), device="cuda:0", dtype=torch.float32], args=TensorList[Tensor[size=(2,), device="cuda:0", dtype=torch.float32]], kwargs={'dim': '2', 'keepdim': 'True', 'interpolation': "'linear'"}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=58 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_quantile_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
======================================================================
ERROR: test_comprehensive_quantile_cuda_float64 (__main__.TestInductorOpInfoCUDA)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
return fn(self, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 2292, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1537, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 886, in inner
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 878, in inner
fn(self, device, dtype, op)
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1127, in test_comprehensive
raise e
File "/tmp/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1087, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 618, in check_model_gpu
check_model(
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 576, in check_model
self.assertEqual(
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 4095, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 12 (16.7%)
Greatest absolute difference: 0.2740929497135982 at index (1, 1, 0, 0) (up to 1e-07 allowed)
Greatest relative difference: 1.0 at index (1, 1, 0, 0) (up to 1e-07 allowed)
The failure occurred for item [0]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_utils.py", line 1615, in wrapper
fn(*args, **kwargs)
File "/tmp/pytorch/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 58: SampleInput(input=Tensor[size=(3, 2, 1, 2), device="cuda:0", dtype=torch.float64], args=TensorList[Tensor[size=(2,), device="cuda:0", dtype=torch.float64]], kwargs={'dim': '2', 'keepdim': 'True', 'interpolation': "'linear'"}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=58 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_quantile_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
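For reference, a rough standalone sketch of the failing case, derived from SampleInput index 58 above; the exact tensor values used by the OpInfo harness are not reproduced here, so treat this as approximate. The same pattern applies to `torch.quantile`:

```python
import torch

def fn(x, q):
    return torch.nanquantile(x, q, dim=2, keepdim=True, interpolation="linear")

x = torch.randn(3, 2, 1, 2, device="cuda")
q = torch.rand(2, device="cuda")

eager = fn(x, q)
compiled = torch.compile(fn)(x, q)
# The failure is a numerical mismatch, not a compile error.
torch.testing.assert_close(compiled, eager)
```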
### Versions
Triton/Torch TOT
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10
| true
|
2,874,625,585
|
[Triton upstream] [Inductor] [ROCm] Cooperative reduction accuracy issues
|
jataylo
|
closed
|
[
"module: rocm",
"triaged",
"oncall: pt2",
"module: inductor",
"upstream triton"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
As seen in https://github.com/pytorch/pytorch/pull/147320 while attempting to bump Triton in preparation for the 3.3 release.
Platform: MI200 only
```
test/inductor/test_cooperative_reductions.py::CooperativeReductionTests::test_reduction_fns_name_sum_float16 failed 0.646448212 "AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 0.1875 at index (0,) (up to 1e-05 allowed)
Greatest relative difference: 0.002857208251953125 at index (0,) (up to 0.001 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_cooperative_reductions.py CooperativeReductionTests.test_reduction_fns_name_sum_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0"
test/inductor/test_cooperative_reductions.py::NoPersistCooperativeReductionTests::test_reduction_fns_name_sum_float16 failed 0.419259442 "AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 0.1875 at index (0,) (up to 1e-05 allowed)
Greatest relative difference: 0.002857208251953125 at index (0,) (up to 0.001 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_cooperative_reductions.py NoPersistCooperativeReductionTests.test_reduction_fns_name_sum_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0"
test/inductor/test_cooperative_reductions.py::MultiKernelCooperativeReductionTests::test_reduction_fns_name_sum_float16 failed 0.428692549 "AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 0.1875 at index (0,) (up to 1e-05 allowed)
Greatest relative difference: 0.002857208251953125 at index (0,) (up to 0.001 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_cooperative_reductions.py MultiKernelCooperativeReductionTests.test_reduction_fns_name_sum_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0"
test/inductor/test_cooperative_reductions.py::CooperativeReductionTests::test_reduction_fns_name_sum_float32 failed 0.4213225 "AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 0.000244140625 at index (0,) (up to 1e-05 allowed)
Greatest relative difference: 3.7351271657826146e-06 at index (0,) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_cooperative_reductions.py CooperativeReductionTests.test_reduction_fns_name_sum_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0"
test/inductor/test_cooperative_reductions.py::NoPersistCooperativeReductionTests::test_reduction_fns_name_sum_float32 failed 0.412552048 "AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 0.000244140625 at index (0,) (up to 1e-05 allowed)
Greatest relative difference: 3.7351271657826146e-06 at index (0,) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_cooperative_reductions.py NoPersistCooperativeReductionTests.test_reduction_fns_name_sum_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0"
test/inductor/test_cooperative_reductions.py::MultiKernelCooperativeReductionTests::test_reduction_fns_name_sum_float32 failed 0.427844165 "AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 1 (100.0%)
Greatest absolute difference: 0.000244140625 at index (0,) (up to 1e-05 allowed)
Greatest relative difference: 3.7351271657826146e-06 at index (0,) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_cooperative_reductions.py MultiKernelCooperativeReductionTests.test_reduction_fns_name_sum_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0"
```
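A minimal sketch of what these tests exercise, assuming the Inductor flag is spelled `triton.cooperative_reductions` (check `torch/_inductor/config.py` and `test_cooperative_reductions.py` for the exact name in your checkout); the tensor shape is illustrative, not taken from the test:

```python
import torch
from torch._inductor import config as inductor_config

def fn(x):
    return x.sum()

x = torch.randn(2**16, device="cuda", dtype=torch.float16)

# Assumption: "triton.cooperative_reductions" is the config key used by the test.
with inductor_config.patch({"triton.cooperative_reductions": True}):
    compiled = torch.compile(fn)(x)

# The reported failures are small tolerance violations on sum reductions.
torch.testing.assert_close(compiled, fn(x))
```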
### Versions
Torch/Triton TOT
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10
| true
|
2,874,618,950
|
[Triton upstream] [Inductor] [ROCm] cpp_wrapper segfaults
|
jataylo
|
closed
|
[
"module: rocm",
"triaged",
"module: inductor",
"upstream triton"
] | 4
|
COLLABORATOR
|
### 🐛 Describe the bug
As seen in https://github.com/pytorch/pytorch/pull/147320 while attempting to bump Triton in preparation for the 3.3 release.
Example failing unit test: `test/inductor/test_gpu_cpp_wrapper.py::TestGpuWrapper::test_dtypeview_float32_bfloat16_cuda_gpu_wrapper`
```
TORCHINDUCTOR_COMPILE_THREADS=1 python inductor/test_gpu_cpp_wrapper.py -k "test_dtypeview_float32_bfloat16_cuda_dynamic_shapes_gpu_wrapper" --verbose
test_dtypeview_float32_bfloat16_cuda_dynamic_shapes_gpu_wrapper (__main__.DynamicShapesGpuWrapperGpuTests) ... /tmp/pytorch/torch/_inductor/compile_fx.py:237: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
Segmentation fault (core dumped)
```
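A rough sketch of the code path the failing dtypeview test exercises; this is not the exact test body, and the shapes and the `+ 1` are illustrative:

```python
import torch
from torch._inductor import config as inductor_config

inductor_config.cpp_wrapper = True  # C++ wrapper codegen path

def fn(x):
    # Reinterpret float32 storage as bfloat16 (a dtype view).
    return x.view(torch.bfloat16) + 1

x = torch.randn(64, 64, device="cuda")
out = torch.compile(fn)(x)  # the reported test segfaults around here on ROCm
```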
cc: @iupaikov-amd
### Versions
Torch/Triton TOT
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10
| true
|
2,874,360,649
|
DISABLED test_inductor_all_reduce_non_contig_input (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"oncall: distributed",
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d",
"oncall: pt2"
] | 18
|
NONE
|
Platforms: inductor, linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_all_reduce_non_contig_input&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37692376208).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the failures will be harder to find in the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_all_reduce_non_contig_input`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/distributed/test_c10d_functional_native.py", line 706, in setUp
dist.init_process_group(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 95, in wrapper
func_return = func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1638, in init_process_group
raise ValueError("trying to initialize the default process group twice!")
ValueError: trying to initialize the default process group twice!
```
</details>
Test file path: `distributed/test_c10d_functional_native.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr @chauhang @penguinwu
| true
|
2,874,223,072
|
[Distribute] len(input_specs) == len(input_args_strategy) AssertionError
|
zqwenn
|
open
|
[
"oncall: distributed",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When I try to use `register_sharding` for a custom op, and the op has keyword arguments (kwargs), it results in an `AssertionError`.
My custom op's schema is as follows:
```python
my_fusion_attention_grad(
Tensor query,
Tensor key,
Tensor value,
Tensor dy,
int head_num,
str input_layout,
*,
Tensor? pse=None,
Tensor? padding_mask=None,
Tensor? atten_mask=None,
Tensor? softmax_max=None,
Tensor? softmax_sum=None,
Tensor? softmax_in=None,
Tensor? attention_in=None
……
)
```
Here is the assertion causing the issue: [AssertionError](https://github.com/pytorch/pytorch/blob/main/torch/distributed/tensor/_ops/utils.py#L263).
I noticed that in the function `unwrap_to_op_info`, arguments and keyword arguments are wrapped separately. As a result, `OpSchema.args_schema` does not contain the strategies for the keyword arguments.
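For concreteness, a minimal sketch of the pattern that trips the assertion; the `torch.ops.mylib` namespace, the placements, and the single all-replicate strategy are illustrative, not the real implementation, and this only runs once the custom op is actually registered:

```python
import torch
from torch.distributed.tensor import Replicate
from torch.distributed.tensor.experimental import register_sharding

@register_sharding(torch.ops.mylib.my_fusion_attention_grad.default)
def my_strategy(query, key, value, dy, head_num, input_layout, *,
                pse=None, atten_mask=None, **kwargs):
    # One (output placements, input placements) pair; None marks non-tensor inputs.
    out_specs = [Replicate()]
    # One entry per input, including the keyword-only tensors:
    in_specs = [Replicate(), Replicate(), Replicate(), Replicate(), None, None,
                Replicate(), Replicate()]  # ..., pse, atten_mask
    # Per the observation above, input_args_strategy is built only from the
    # positional args in OpSchema.args_schema, so its length no longer matches
    # input_specs once the kwarg tensors are included, and the assert fires.
    return [(out_specs, in_specs)]
```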
Could you please explain why this assertion is necessary? If it is not essential, would it be possible to remove it?
Thank you for your assistance.
### Versions
latest version
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|