| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,913,029,543
|
pin_memory() function doesn't work when it is called before lazy device initialization
|
BartlomiejStemborowski
|
closed
|
[] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When **pin_memory()** is called on a tensor before the device is initialized, **is_pinned()** always returns false. I believe this was broken by PR #145752, more precisely by not calling lazyInitDevice in the **_pin_memory** function.
Reproduction:
```python
import torch
ifm = torch.tensor([2])
ifm = ifm.pin_memory()
print(ifm.is_pinned())
```
Output: False
Expected: True
The code below works, since pinning memory via the pin_memory param works fine and initializes the device.
```python
import torch
ifm = torch.tensor([2], pin_memory=True)
print(ifm.is_pinned())
ifm = torch.tensor([2])
ifm = ifm.pin_memory()
print(ifm.is_pinned())
```
Output: True
True
Expected: True
True
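For completeness, a hedged workaround sketch: if the root cause really is the missing lazy device initialization, explicitly initializing CUDA before pinning should sidestep the problem (assumes a CUDA build and an available CUDA device).
```python
import torch

# Workaround sketch (assumption): force device initialization up front so that
# the subsequent pin_memory() call sees an initialized device.
torch.cuda.init()
ifm = torch.tensor([2]).pin_memory()
print(ifm.is_pinned())
```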
@ngimel @albanD
### Versions
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.34
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] nvtx==0.2.11
[pip3] optree==0.14.1
[pip3] pynvjitlink-cu12==0.5.1
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250311+cu128
[pip3] torchaudio==2.6.0.dev20250311+cu128
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.22.0.dev20250311+cu128
[pip3] triton==3.1.0
[conda] Could not collect
| true
|
2,912,993,241
|
Add AOTI shim for _weight_int4pack_mm_cpu_tensor
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149031
**Summary**
The previous implementation of the shim did not align with the design and was removed by https://github.com/pytorch/pytorch/pull/148907.
This PR adds it back in the MKLDNN backend files and re-enables the CPP wrapper UT.
**Test plan**
```
pytest -s test/inductor/test_cpu_cpp_wrapper.py -k test_woq_int4
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,912,720,548
|
[ca] fix lazily compiled aot bwd
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 6
|
MEMBER
|
FIXES https://github.com/pytorch/pytorch/issues/137372
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149014
* #149064
* #148801
* __->__ #149030
* #148799
Sometimes the AOT backward is lowered lazily, so the bw_module we saved in CompiledFunction._lazy_backward_info hasn't gone through the post-grad passes, specifically the view_to_reshape pass. Running it directly will then sometimes error, because the AOT forward has already changed its views to reshapes, and that change is reflected in the gradients we see in CA.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,912,698,379
|
reshard_after_forward does not work as expected in FSDP2
|
caiqi
|
closed
|
[
"oncall: distributed",
"module: fsdp"
] | 3
|
NONE
|
### 🐛 Describe the bug
@awgu When enabling the reshard_after_forward flag, parameters appear to remain unsharded even after the forward pass completes. While this works as expected for simple networks, the text encoder module from HuggingFace Transformers exhibits a memory increase after forward propagation even within a torch.no_grad() context. Manually invoking reshard() post-forward reduces memory usage, suggesting automatic resharding is not occurring as intended.
Observations:
- Minimal Example Works: Basic networks behave correctly with reshard_after_forward.
- Transformer Text Encoder Fails: Memory usage grows after forward passes in no_grad mode, implying parameters are retained in unsharded state.
- Manual Intervention Resolves: Explicitly calling reshard() post-forward reduces memory.
- Reproducibility: A minimal reproducible example is provided below.
```
import torch
import torch.distributed as dist
from torch import nn
from torch.distributed.fsdp import fully_shard, MixedPrecisionPolicy, FSDPModule
import os
from diffusers import DiffusionPipeline, StableDiffusion3Pipeline
from transformers.models.t5.modeling_t5 import T5Block


class SimpleNet(nn.Module):
    def __init__(self, *args, **kwargs):
        super(SimpleNet, self).__init__()
        self.nets = nn.ModuleList()
        for i in range(40):
            self.nets.append(nn.Conv2d(4096, 4096, 3, padding=1))
        self.attn_stream = torch.cuda.Stream()

    def forward(self, x):
        for layer in self.nets:
            x = layer(x)
        return x


def print_memory(desp):
    rank = int(os.environ['RANK'])
    torch.cuda.empty_cache()
    if rank == 0:
        print(f"{desp} Memory: ", torch.cuda.memory_reserved() / 1024 / 1024, "MB")


def recursive_reshard(module: nn.Module):
    for n, m in reversed(list(module.named_modules())):
        if isinstance(m, FSDPModule):
            m.reshard()
    module.reshard()


if "__main__" == __name__:
    dist.init_process_group(backend='nccl')
    local_rank = int(os.environ['LOCAL_RANK'])
    rank = int(os.environ['RANK'])
    world_size = int(os.environ['WORLD_SIZE'])
    torch.cuda.set_device(local_rank)
    mp_policy = MixedPrecisionPolicy(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        output_dtype=torch.bfloat16,
        cast_forward_inputs=True
    )
    model = SimpleNet()
    model = model.to("cuda", torch.bfloat16)
    model_params = sum(p.numel() for p in model.parameters()) / 1e6
    print_memory(f"Model params: {model_params}M")
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            fully_shard(module, reshard_after_forward=True, mp_policy=mp_policy)
    fully_shard(model, mp_policy=mp_policy)
    pipeline_utils = DiffusionPipeline.from_pretrained("./stable-diffusion-3-medium-diffusers", text_encoder=None, text_encoder_2=None, vae=None, transformer=None)
    for module in pipeline_utils.text_encoder_3.modules():
        if isinstance(module, T5Block):
            fully_shard(module, reshard_after_forward=True, mp_policy=mp_policy)
    fully_shard(pipeline_utils.text_encoder_3, mp_policy=mp_policy)
    text_encoder_params = sum(p.numel() for p in pipeline_utils.text_encoder_3.parameters()) / 1e6
    print_memory(f"Text encoder params: {text_encoder_params}M")
    model.requires_grad_(False)
    print_memory("after init model with fsdp")
    fake_x = torch.randn(1, 4096, 16, 16, device="cuda", dtype=torch.bfloat16)
    with torch.no_grad():
        target = model(fake_x)
    print_memory("SimpleNet forward finished")
    model.reshard()
    print_memory("SimpleNet reshard finished")
    with torch.no_grad():
        text_inputs = pipeline_utils.tokenizer_3(
            "a prompt",
            padding="max_length",
            max_length=256,
            truncation=True,
            add_special_tokens=True,
            return_tensors="pt",
        ).input_ids
        prompt_embeds = pipeline_utils.text_encoder_3(text_inputs.to("cuda"))[0]
    print_memory("Encode prompt finished")
    pipeline_utils.text_encoder_3.reshard()
    print_memory("Text encoder reshard finished")
    dist.destroy_process_group()
    print_memory("Done")
```

### Versions
2.7.0.dev20250107+cu124
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,912,631,085
|
[Intel gpu] always set deterministic for xpu accuracy test
|
jianyizh
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"ciflow/xpu",
"release notes: xpu"
] | 19
|
CONTRIBUTOR
|
On Intel Max 1550, models like Super_SloMo can actually pass the accuracy test after setting deterministic, because we do not use atomics in upsampling bilinear backward in some cases when running on XPU. Furthermore, I suspect the only reason not to set deterministic on these models is just to avoid errors, so we should use warn_only = True.
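For reference, a minimal sketch of what "set deterministic with warn_only = True" refers to (illustrative, not the exact patch to the benchmark harness):
```python
import torch

# Warn (instead of raising) when an op has no deterministic implementation.
torch.use_deterministic_algorithms(True, warn_only=True)
```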
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,912,619,830
|
[Inductor][Optimus] split cat aten pass
|
mengluy0125
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9
|
CONTRIBUTOR
|
Summary:
We add an aten pattern to optimize big cat nodes with an arbitrary order of inputs, to support APS jobs.
context: https://docs.google.com/document/d/1G2qFcQu1K7VXbz2uPe0CS2aBirnwtwI_B8lxmlBlAPQ/edit?tab=t.0
Test Plan:
### how to enable
Add the following patterns to the post grad
```
post_grad_fusion_options={
"normalization_aten_pass": {},
"split_cat_aten_pass": {"threshold_to_cat": 10},
},
```
You can tune threshold_to_cat to achieve the best performance. If nothing is given, the default value of 10 will be used.
### unit test
```
buck2 test 'fbcode//mode/dev-nosan' fbcode//caffe2/test/inductor:split_cat_fx_aten_passes -- test_split_cat_post_grad
```
Buck UI: https://www.internalfb.com/buck2/9e52168d-c107-4be8-a46b-b9d239f5c50d
Test UI: https://www.internalfb.com/intern/testinfra/testrun/17732923605061752
Network: Up: 112KiB Down: 132KiB (reSessionID-915796e0-4a8f-486a-9f63-afb1e191d24a)
Executing actions. Remaining 0/3 1.0s exec time total
Command: test. Finished 2 local
Time elapsed: 4:57.9s
Tests finished: Pass 2. Fail 0. Fatal 0. Skip 0. Build failure 0
### E2E
baseline
f691990503
proposal
Differential Revision: D71017436
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,912,610,253
|
Add `__all__` for `torch.utils.dlpack`
|
ringohoffman
|
closed
|
[
"triaged",
"open source",
"Merged",
"module: dlpack",
"ciflow/trunk",
"release notes: python_frontend",
"topic: not user facing"
] | 14
|
CONTRIBUTOR
|
Fixes the issue:
```python
torch.utils.dlpack.to_dlpack(tensor) # "to_dlpack" is not exported from module "torch.utils.dlpack" Pylance[reportPrivateImportUsage](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportPrivateImportUsage)
```
the docs for `torch.utils.dlpack`: https://pytorch.org/docs/stable/dlpack.html
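A minimal sketch of the kind of change (the exact export list is whatever the PR defines):
```python
# torch/utils/dlpack.py (sketch): declare the public API so static checkers
# like Pylance stop flagging reportPrivateImportUsage.
__all__ = [
    "DLDeviceType",
    "from_dlpack",
    "to_dlpack",
]
```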
| true
|
2,912,601,118
|
[inductor] Fix profiler tests with latest Triton
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149025
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,912,538,471
|
DISABLED test_wrap_kwarg_only_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_wrap_kwarg_only_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38598241962).
Over the past 3 hours, it has been determined flaky in 13 workflow(s) with 26 failures and 13 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_wrap_kwarg_only_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,912,492,523
|
[Inductor UT] Enable PYTORCH_TESTING_DEVICE_ONLY_FOR test case filter for test_torchinductor.py
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149023
The environment variable PYTORCH_TESTING_DEVICE_ONLY_FOR controls the devices
in get_desired_device_type_test_bases, so we add RUN_CPU and RUN_GPU to
make sure cases are only enabled for the devices specified in PYTORCH_TESTING_DEVICE_ONLY_FOR,
e.g. only enable GPU cases, and not CPU cases even when HAS_CPU is true.
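A rough sketch of the intended gating (RUN_CPU/RUN_GPU are the names introduced by this PR; HAS_CPU/HAS_GPU/GPU_TYPE are stand-ins for test-utility flags, used here only for illustration):
```python
import os
import torch

# Placeholders standing in for the test-utility availability flags.
HAS_CPU = True
HAS_GPU = torch.cuda.is_available()
GPU_TYPE = "cuda"

_desired = os.environ.get("PYTORCH_TESTING_DEVICE_ONLY_FOR", "")

# Only run a device's cases if it is available AND either no filter is set
# or the filter explicitly names that device.
RUN_CPU = HAS_CPU and (not _desired or "cpu" in _desired)
RUN_GPU = HAS_GPU and (not _desired or GPU_TYPE in _desired)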
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,912,474,255
|
Support broadcast for nested tensors
|
shadow150519
|
closed
|
[
"triaged",
"module: nestedtensor"
] | 2
|
NONE
|
### 🚀 The feature, motivation and pitch
When working with variable-length sequence batches using NestedTensor, there is a common need to perform element-wise operations (e.g., scaling, weighting) where each sequence in the batch requires a unique tensor operation specific to that sequence. However, the current implementation of NestedTensor does not support broadcasting between two NestedTensors at the sequence level.
```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # added so the snippet runs as-is
seq1 = torch.rand(367, 1024)
seq2 = torch.rand(1245, 1024)
seq3 = torch.rand(156, 1024)
nest_a = torch.nested.nested_tensor([seq1, seq2, seq3], dtype=torch.float, device=device)
nest_e = torch.nested.nested_tensor([torch.rand(1, 1024), torch.rand(1, 1024), torch.rand(1, 1024)], dtype=torch.float, device=device)
nest_a * nest_e
```
It reports the following error:
```python
---------------------------------------------------------------------------
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[28], line 6
4 nest_a = torch.nested.nested_tensor([seq1, seq2, seq3], dtype=torch.float, device=device)
5 nest_e = torch.nested.nested_tensor([torch.rand(1,1024), torch.rand(1,1024), torch.rand(1,1024)], dtype=torch.float, device=device)
----> 6 nest_a * nest_e
RuntimeError: mul does not support broadcasting when given a NestedTensor
```
So I think we can introduce sequence-level broadcasting support for NestedTensor operations where:
+ The outer dimension (batch size B) must match exactly.
+ Inner dimensions (e.g., sequence length T_i, features D) are broadcastable per sequence.
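Until such broadcasting exists, a per-sequence workaround sketch (continuing the snippet above: unbind both nested tensors, multiply each pair with regular dense broadcasting, and re-nest the results):
```python
# Multiply each sequence by its own (1, 1024) factor, then rebuild the nested tensor.
nest_prod = torch.nested.nested_tensor(
    [a * e for a, e in zip(nest_a.unbind(), nest_e.unbind())],
    dtype=torch.float,
    device=device,
)
```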
### Alternatives
_No response_
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
| true
|
2,912,401,282
|
[MPSInductor] Fix `argmin`/`argmax` long reductions
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149021
* #149020
* #149004
By adding an additional indices array for aggregates and populating it when performing partial reductions.
With that I can finally `torch.compile` TinyStories and get 600+ tokens/sec vs <200 in eager mode.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,912,401,222
|
[MPSInductor][EZ] Fix argmin/max signatures
|
malfet
|
closed
|
[
"topic: bug fixes",
"release notes: mps"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149021
* __->__ #149020
* #149004
threadgroup_argmin used to return the input type, which is wrong; it should have returned `int` or `long`.
Change the signatures of both threadgroup_argmin and threadgroup_argmax to return `int`; as the group size is small, there is no need to carry over large integers.
| true
|
2,912,387,552
|
Avoid oneDNN primitives when GradMode is enabled on avx2_vnni_2
|
CaoE
|
closed
|
[
"module: cpu",
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #148861.
oneDNN currently only supports bf16/f16 forward on platforms with avx2_vnni_2. Add an additional check to avoid oneDNN primitives when GradMode is enabled on avx2_vnni_2.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,912,349,091
|
Add `nn.Bilinear` param validation
|
zeshengzong
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn"
] | 10
|
CONTRIBUTOR
|
Fixes #103425
## Changes
- Add doc description stating size values `must be > 0`
- Add validation for the `in1_features` param
Currently, only `in1_features` will cause a runtime error; adding checks for `in2_features` and `out_features` as well might be somewhat BC-breaking.
```python
import torch
from torch import nn


class lenet(nn.Module):
    def __init__(self):
        super(lenet, self).__init__()
        self.conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, stride=1)
        # Error; `in1_features=1, in2_features=0, out_features=0` no error
        self.linear = nn.Bilinear(in1_features=0, in2_features=0, out_features=0)

    def forward(self, x):
        # 1st block
        x = self.conv(x)
        x = self.linear(x)
        return x


if __name__ == '__main__':
    net = lenet()
```
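A minimal sketch of the kind of validation described above (hypothetical helper and message wording; the actual check lives in the PR):
```python
def _validate_in1_features(in1_features: int) -> None:
    # Only in1_features is validated here, matching the PR's scope, to avoid a
    # potential BC break for in2_features/out_features.
    if in1_features <= 0:
        raise ValueError(f"in1_features must be > 0, but got {in1_features}")
```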
## Test Result
```bash
pytest test/test_nn.py -k test_bilinear -vv
```


| true
|
2,912,310,209
|
[MPS] Enable angle and atan2 for `torch.long`
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
This check was added by https://github.com/pytorch/pytorch/pull/85817, which introduced no unit tests, and its content seems to be totally unrelated to the title/subject of that PR. Anyway, right now it seems to work fine on macOS 13+.
| true
|
2,912,263,450
|
DISABLED test_var_mean_tile_reduction_True_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_var_mean_tile_reduction_True_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38592796229).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 12 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_var_mean_tile_reduction_True_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 5370, in test_var_mean
self.common(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 631, in check_model_gpu
check_model(
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 513, in check_model
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 4 / 4 (100.0%)
Greatest absolute difference: 0.6583480834960938 at index (0, 2) (up to 1e-05 allowed)
Greatest relative difference: 0.8614732623100281 at index (0, 0) (up to 1.3e-06 allowed)
The failure occurred for item [2]
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_var_mean_tile_reduction_True_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,912,182,725
|
[cutlass backend] try make cutlass backend benchmark more robust
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149015
Differential Revision: [D71006269](https://our.internmc.facebook.com/intern/diff/D71006269/)
I want to make sure the benchmark, even if it fails on some experiment, can still print most of the results.
```
Experiment group: mm (3x3, 3x3) torch.bfloat16
+-----------------------+-------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+-------------------+----------------------+---------------------+
| aten | 6.175220478326082 | 0.5982149520423263 | NA |
| triton | 5.326753947883844 | 3.2067150759976357 | -13.739858089605114 |
| triton_persistent_tma | 5.340870004147291 | 3.279932268196717 | -13.51126615004617 |
| cutlass_lvl_default | inf | inf | inf |
| cutlass_lvl_1111 | inf | inf | inf |
| cutlass_lvl_2222 | inf | inf | inf |
| cutlass_lvl_3333 | inf | inf | inf |
+-----------------------+-------------------+----------------------+---------------------+
```
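A minimal sketch of the guarding pattern implied by the table above (hypothetical names; failed experiments fall back to inf rather than aborting the whole run):
```python
import math

def run_experiments(experiments):
    # experiments: mapping of name -> zero-arg callable returning
    # (forward_time_us, compilation_time_s)
    results = {}
    for name, fn in experiments.items():
        try:
            results[name] = fn()
        except Exception as exc:
            # Record a sentinel instead of aborting, so the rest of the
            # table can still be printed.
            print(f"{name} failed: {exc}")
            results[name] = (math.inf, math.inf)
    return results
```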
| true
|
2,912,180,129
|
[ca] don't inline accumulate grad op
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149229
* __->__ #149014
* #149064
We use dummy tensors in our initial trace, so we should never inline. The subclass dispatch might not support the dummy tensor; e.g., DTensor accumulate grad will check that both param and grad are DTensors.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,912,170,299
|
Unexpected out-of-boundary behavior in `grid_sample`
|
turtleizzy
|
open
|
[
"module: nn",
"triaged",
"module: numpy",
"module: edge cases"
] | 3
|
NONE
|
### 🐛 Describe the bug
`grid_sample` should return the padding value (0) when grid coordinates fall outside `[-1, 1]`, but it does not.
I spotted this problem when I couldn't replicate the result of `grid_sample` with other libraries like `scipy.ndimage.map_coordinates` and `itk.DisplacementFieldTransform`. I experimented with labelmaps and the output segmentation from grid_sample was consistently larger than the output from other implementations.
```python
import torch
img = torch.ones([1, 1, 10, 10, 10])
grid = torch.ones([1, 1, 1, 10, 3]) + torch.linspace(-0.05, 0.05, 10)[None, None, None, :, None]
print(grid)
# tensor([[[[[0.9500, 0.9500, 0.9500],
# [0.9611, 0.9611, 0.9611],
# [0.9722, 0.9722, 0.9722],
# [0.9833, 0.9833, 0.9833],
# [0.9944, 0.9944, 0.9944],
# [1.0056, 1.0056, 1.0056],
# [1.0167, 1.0167, 1.0167],
# [1.0278, 1.0278, 1.0278],
# [1.0389, 1.0389, 1.0389],
# [1.0500, 1.0500, 1.0500]]]]])
print(torch.nn.functional.grid_sample(img, grid, align_corners=True, mode='nearest', padding_mode='zeros'))
# tensor([[[[[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]]]])
```
### Versions
PyTorch version: 2.1.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.3 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.27
Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0] (64-bit runtime)
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @rgommers
| true
|
2,912,143,601
|
[ez] Flush trymerge print statements
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Logs of trymerge don't match up with their timestamps, e.g.
https://github.com/pytorch/pytorch/actions/runs/13766246347/job/38493307591
Example:
```
2025-03-10T14:20:41.4899509Z Attempting merge of https://github.com/pytorch/pytorch/pull/148648 (0.003460856278737386 minutes elapsed)
...
2025-03-10T14:20:41.4907867Z Merge of https://github.com/pytorch/pytorch/pull/148648 failed due to: Still waiting for 16 jobs to finish, first few of them are: Check Labels / Check labels, trunk / macos-py3-arm64 / build, trunk / win-vs2022-cpu-py3 / build, trunk / cuda12.4-py3.10-gcc9-sm80 / build, trunk / win-vs2022-cuda12.6-py3 / build. Retrying in 5 min
2025-03-10T14:20:41.4909772Z Attempting merge of https://github.com/pytorch/pytorch/pull/148648 (5.280085611343384 minutes elapsed)
...
2025-03-10T14:20:41.4916812Z Merge of https://github.com/pytorch/pytorch/pull/148648 failed due to: Still waiting for 15 jobs to finish, first few of them are: trunk / macos-py3-arm64 / build, trunk / win-vs2022-cpu-py3 / build, trunk / cuda12.4-py3.10-gcc9-sm80 / build, trunk / win-vs2022-cuda12.6-py3 / build, trunk / linux-focal-cuda12.6-py3.10-gcc11-no-ops / build. Retrying in 5 min
2025-03-10T14:20:41.4918183Z Attempting merge of https://github.com/pytorch/pytorch/pull/148648 (10.590279157956441 minutes elapsed)
```
Either print buffering or GitHub Actions logging is being weird?
Print with flush to see if it helps.
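For reference, the change amounts to the standard flush-on-print pattern (a minimal sketch):
```python
# Force each log line out immediately instead of relying on buffered stdout.
print("Attempting merge ...", flush=True)
```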
| true
|
2,912,109,914
|
Fix issue #149006: Added docstring for backward()
|
Jason1ien
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
NONE
|
Added a clear docstring for the backward() function to enhance readability of the code.
Fixes #149006
| true
|
2,912,108,093
|
[XFORMERS] torch._dynamo.exc.Unsupported
|
bhack
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Exporting/compiling Meta Research's xformers `memory_efficient_attention` has an issue with dispatch:
https://github.com/facebookresearch/xformers/blob/52f96c05723e9b79c88f25a4c406816ef2348a10/xformers/ops/fmha/dispatch.py#L70
### Error logs
```python
torch._dynamo.exc.Unsupported: SKIPPED INLINING <code object indent at 0x7e331b373750, file "/opt/conda/lib/python3.11/textwrap.py", line 470>:
------
x = xops.memory_efficient_attention(q, k, v, attn_bias=self.bias)
File "/opt/conda/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 306, in memory_efficient_attention
return _memory_efficient_attention(
File "/opt/conda/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 467, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/opt/conda/lib/python3.11/site-packages/xformers/ops/fmha/__init__.py", line 486, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "/opt/conda/lib/python3.11/site-packages/xformers/ops/fmha/dispatch.py", line 135, in _dispatch_fw
return _run_priority_list(
File "/opt/conda/lib/python3.11/site-packages/xformers/ops/fmha/dispatch.py", line 70, in _run_priority_list
{textwrap.indent(_format_inputs_description(inp), ' ')}"""
```
### Versions
nightly
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,912,105,542
|
[cutlass backend] switch layout for cutlass backend benchmark
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149009
```
python benchmarks/inductor_backends/cutlass.py
```
logs:
```
Experiment group: mm (1024x1024, 1024x1024) torch.float16
+-----------------------+--------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+---------------------+
| aten | 13.059554621577263 | 1.580178506206721 | NA |
| triton | 10.245470330119133 | 0.04118620231747627 | -21.54808776410064 |
| triton_persistent_tma | 10.388538241386414 | 0.04225084185600281 | -20.45258400908819 |
| cutlass_lvl_default | 12.882896699011326 | 231.14990583620965 | -1.3527101626732294 |
| cutlass_lvl_1111 | 11.362981051206589 | 126.41650272067636 | -12.99105229490415 |
| cutlass_lvl_2222 | 11.107578873634338 | 555.8380545829423 | -14.946725248331441 |
+-----------------------+--------------------+----------------------+---------------------+
Experiment group: mm (1024x1024, 1024x1024) torch.bfloat16
+-----------------------+--------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+---------------------+
| aten | 14.037585817277431 | 0.21587548777461052 | NA |
| triton | 10.571777820587158 | 78.15654796129093 | -24.68948750735019 |
| triton_persistent_tma | 10.761583223938942 | 1.3195342738181353 | -23.337364672110443 |
| cutlass_lvl_default | 12.872588820755482 | 237.0100042372942 | -8.299126443010406 |
| cutlass_lvl_1111 | 11.08622644096613 | 137.55013868492097 | -21.02469338195443 |
| cutlass_lvl_2222 | 11.044904589653015 | 551.265836935956 | -21.319059178545007 |
+-----------------------+--------------------+----------------------+---------------------+
Experiment group: mm (2048x2048, 2048x2048) torch.float16
+-----------------------+--------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+---------------------+
| aten | 30.483894050121307 | 0.27990864124149084 | NA |
| triton | 29.567627236247063 | 99.87172158574685 | -3.005740711366232 |
| triton_persistent_tma | 29.66325916349888 | 1.3695051120594144 | -2.692027748401006 |
| cutlass_lvl_default | 29.82821688055992 | 72.61214569816366 | -2.150897022812533 |
| cutlass_lvl_1111 | 29.476772993803024 | 67.7428645719774 | -3.303780857728953 |
| cutlass_lvl_2222 | 30.113255605101585 | 233.84051702311262 | -1.2158500630212203 |
+-----------------------+--------------------+----------------------+---------------------+
Experiment group: mm (2048x2048, 2048x2048) torch.bfloat16
+-----------------------+--------------------+----------------------+---------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+---------------------+
| aten | 30.58255836367607 | 0.058386584743857384 | NA |
| triton | 29.799651354551315 | 100.18178300186992 | -2.559978795150901 |
| triton_persistent_tma | 29.362043365836143 | 1.534341821912676 | -3.990885861562106 |
| cutlass_lvl_default | 29.4346883893013 | 73.68858492700383 | -3.7533484305817093 |
| cutlass_lvl_1111 | 29.164200648665428 | 75.44329373072833 | -4.637799421958348 |
| cutlass_lvl_2222 | 29.13798950612545 | 227.33327346481383 | -4.7235056020244 |
+-----------------------+--------------------+----------------------+---------------------+
Experiment group: mm (8192x8192, 8192x8192) torch.float16
+-----------------------+--------------------+----------------------+--------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+--------------------+
| aten | 1656.6237211227417 | 0.0549461180344224 | NA |
| triton | 1892.8285837173462 | 2.3174119112081826 | 14.258208401997386 |
| triton_persistent_tma | 1665.332317352295 | 2.7922237082384527 | 0.525683419747917 |
| cutlass_lvl_default | 1705.5492401123047 | 108.31571159465238 | 2.9533272019312116 |
| cutlass_lvl_1111 | 1714.9059772491455 | 17.64627545280382 | 3.518134829489478 |
| cutlass_lvl_2222 | 1680.4152727127075 | 306.9972395859659 | 1.4361469829637354 |
+-----------------------+--------------------+----------------------+--------------------+
Experiment group: mm (8192x8192, 8192x8192) torch.bfloat16
+-----------------------+--------------------+----------------------+--------------------+
| name | forward_time (us) | compilation_time (s) | perf_over_aten (%) |
+-----------------------+--------------------+----------------------+--------------------+
| aten | 1621.416687965393 | 0.06300561130046844 | NA |
| triton | 1782.3902368545532 | 2.318530729971826 | 9.927956834535548 |
| triton_persistent_tma | 1586.0934257507324 | 2.7931175641715527 | -2.178543151605614 |
| cutlass_lvl_default | 1657.4617624282837 | 43.31810224894434 | 2.2230605328307784 |
| cutlass_lvl_1111 | 1641.5367126464844 | 17.648567833006382 | 1.2408916739557292 |
| cutlass_lvl_2222 | 1645.8417177200317 | 249.33647010894492 | 1.5064005407078918 |
+-----------------------+--------------------+----------------------+--------------------+
```
| true
|
2,912,087,583
|
[AOTI][Debug logger] Min value: Error: "min_all_cuda" not implemented for 'Float8_e4m3fn'
|
henrylhtsang
|
open
|
[
"triaged",
"module: float8",
"module: aotinductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The problem is with the AOTI intermediate-value debug logger with FP8.
repro:
```
import torch
import torch._inductor.config as config

config.aot_inductor.debug_intermediate_value_printer = "2"
config.aot_inductor.filtered_kernel_names = "triton_poi_fused__to_copy_add_0"


class Model(torch.nn.Module):
    def forward(self, x):
        x = x.to(torch.float)
        return x + 1


model = Model().cuda()
x = torch.randn(10).cuda().to(torch.float8_e4m3fn)
ep = torch.export.export(model, (x,))
path = torch._inductor.aoti_compile_and_package(ep)
aot_model = torch._inductor.aoti_load_package(path)
aot_model(x)
print("done")
```
logs:
```
[ CUDAFloat8_e4m3fnType{10} ]
Number of elements: 10
Dtype: c10::Float8_e4m3fn
Mean value: -0.124023
Min value: Error: "min_all_cuda" not implemented for 'Float8_e4m3fn'
```
### Versions
trunk
cc @yanbing-j @vkuzo @albanD @kadeng @penguinwu @desertfire @chenyang78 @yushangdi @benjaminglass1 @chauhang @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,912,076,400
|
[AOTI][debug logger] small fix for intermediate value debugger for jit when arg is not tensor
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #149007
repro:
```
import torch
import torch._inductor.config as config

config.aot_inductor.debug_intermediate_value_printer = "2"
config.aot_inductor.filtered_kernel_names = "triton_poi_fused__to_copy_add_0"


class Model(torch.nn.Module):
    def forward(self, x):
        x = x.to(torch.float)
        return x + 1


model = Model().cuda()
x = torch.randn(10).cuda().to(torch.float8_e4m3fn)
_ = torch.compile(model, fullgraph=True)(x)
print("done")
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,912,059,086
|
Missing Additional documentation in autograd.py
|
Jason1ien
|
closed
|
[
"module: docs",
"triaged",
"module: library",
"oncall: pt2",
"module: pt2-dispatcher"
] | 3
|
NONE
|
Some of the functions within autograd.py are missing some docstrings.
Specifically, the backward() function is missing a docstring.
Below is the link to the file:
https://github.com/pytorch/pytorch/blob/main/torch/_library/autograd.py
My systems specs:
Windows 11 Home
Intel 11th Gen Core i7-11800H @ 2.30GHz
Nvidia RTX 3050 @ 45 Watts
cc @svekars @sekyondaMeta @AlannaBurke @chauhang @penguinwu @zou3519 @bdhirsh @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @anjali411
| true
|
2,912,053,466
|
flex_attention without CUDA
|
jjh42
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Flex attention is great. But if a model is implemented with flex attention, it can only run on a CUDA device.
This (baby-step) proposal is to implement a pure Python function
flex_attention.create_full_mask()
which will accept the same parameters as create_block_mask but return a dense tensor that can be passed as a mask to scaled_dot_product_attention.
In this way you can e.g. train on a CUDA device but perform inference on e.g. CPU with fewer code changes.
I don't propose trying to get the speed benefits of flex_attention, just code that behaves identically.
This function can also be useful for visualizing masks.
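A minimal sketch of the idea, assuming a flex-attention-style mask_mod(b, h, q_idx, kv_idx) callable (illustrative only, not the proposed implementation):
```python
import torch
import torch.nn.functional as F

def create_full_mask(mask_mod, B, H, Q_LEN, KV_LEN, device="cpu"):
    # Materialize a dense boolean mask from a mask_mod(b, h, q_idx, kv_idx)
    # callable; the result broadcasts to shape (B, H, Q_LEN, KV_LEN).
    b = torch.arange(B, device=device)[:, None, None, None]
    h = torch.arange(H, device=device)[None, :, None, None]
    q_idx = torch.arange(Q_LEN, device=device)[None, None, :, None]
    kv_idx = torch.arange(KV_LEN, device=device)[None, None, None, :]
    return mask_mod(b, h, q_idx, kv_idx)

# Example: a causal mask reused as attn_mask in scaled_dot_product_attention on CPU.
causal = create_full_mask(lambda b, h, q, kv: q >= kv, B=1, H=1, Q_LEN=8, KV_LEN=8)
query = key = value = torch.randn(1, 1, 8, 16)
out = F.scaled_dot_product_attention(query, key, value, attn_mask=causal)
```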
### Alternatives
- We could consider writing a flex_attention function that takes the block mask and computes flex attention on CPU / MPS etc. I think this will likely be slower since scaled_dot_product_attention is fairly optimized on several platforms now.
### Additional context
_No response_
| true
|
2,912,043,919
|
[MPSInductor] Fix `min`/`max` reductions over large dims
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149021
* #149020
* __->__ #149004
Simple followup after sum/prod
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,912,017,990
|
[test] bigger runnner
|
clee2000
|
open
|
[
"ciflow/trunk",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Now that I actually use the new version of the calculate docker image action, I feel like I should have done it differently...
| true
|
2,912,014,737
|
[inductor] nan_asserts doesn't work for FP8, "RuntimeError: "isinf" not implemented for 'Float8_e4m3fn'"
|
henrylhtsang
|
open
|
[
"triaged",
"module: inductor",
"module: float8"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
repro:
```
import torch
import torch._inductor.config as config

config.nan_asserts = True


class Model(torch.nn.Module):
    def forward(self, x):
        return x.half() + 1


model = Model().cuda()
x = torch.randn(10).cuda().to(torch.float8_e4m3fn)
_ = torch.compile(model, fullgraph=True)(x)
print("done")
```
logs:
```
File "/home/henrylhtsang/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 495, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/henrylhtsang/pytorch/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/henrylhtsang/pytorch/torch/_inductor/utils.py", line 2397, in run
return model(new_inputs)
^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_henrylhtsang/6t/c6tjaevpzkaafbwl4rv7kbghqfncxogc5dkfbssmyqsp6nh7saot.py", line 92, in call
assert not arg0_1.isinf().any().item()
^^^^^^^^^^^^^^
RuntimeError: "isinf" not implemented for 'Float8_e4m3fn'
```
### Versions
trunk
cc @chauhang @penguinwu @yanbing-j @vkuzo @albanD @kadeng @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
2,912,012,314
|
Explicitly set use-ephemeral runners for windows nightly cpu test jobs
|
atalman
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
This PR migrated Windows builds to use ephemeral runners: https://github.com/pytorch/pytorch/pull/134463; however, it missed the test jobs.
Explicitly set use-ephemeral runners for Windows nightly CPU tests.
Please note we should already be using ephemeral runners for these after https://github.com/pytorch/test-infra/pull/6377 (recently migrated).
| true
|
2,911,947,066
|
DISABLED test_wrap_kwarg_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_wrap_kwarg_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38582441675).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 14 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_wrap_kwarg_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,911,921,877
|
test diff
|
c00w
|
closed
|
[] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148999
Summary:
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
| true
|
2,911,916,127
|
Denote a table of type conversions through StableIValue
|
janeyx99
|
closed
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148998
| true
|
2,911,911,110
|
[dynamo][invoke_subgraph] Faster aliasing checks
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148997
* #148953
* #149072
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,911,909,996
|
[codemod][lowrisk] Fix deprecated use of 0/NULL in caffe2/aten/src/ATen/native/quantized/cpu/qnnpack/src/fc-unpack.cc + 1
|
r-barnes
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: cpp",
"topic: improvements",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary:
`nullptr` is typesafe. `0` and `NULL` are not. In the future, only `nullptr` will be allowed.
This diff helps us embrace the future _now_ in service of enabling `-Wzero-as-null-pointer-constant`.
Test Plan: Sandcastle
Reviewed By: dtolnay
Differential Revision: D70939306
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,911,887,859
|
[test] for https://github.com/pytorch/pytorch/pull/147994/files
|
clee2000
|
closed
|
[
"topic: not user facing",
"ciflow/xpu"
] | 1
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
| true
|
2,911,885,905
|
[PGNCCL] Stash tensors for reduce_scatter_v and all_gather_v
|
kwen2501
|
closed
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148994
https://github.com/pytorch/pytorch/pull/148590 removed `record_stream`. Since the previous `AVOID_RECORD` flag does not cover `reduce_scatter_v` and `all_gather_v`, which are in coalescing form, these two ops were missed, causing TorchRec's Variable Length Embedding to fail.
This PR adds a vector to stash tensors while coalescing is in flight. At the end of coalescing, it hands over the tensors to `Work`.
The rest of the PR is mostly BE: grouping various variables related to coalescing into one single struct and offering a `reset` method, so that it is easier to extend what we need to temporarily bookkeep.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [D71589949](https://our.internmc.facebook.com/intern/diff/D71589949)
| true
|
2,911,871,875
|
skip torchbind in constant folding
|
yushangdi
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: aotinductor"
] | 5
|
CONTRIBUTOR
|
Summary:
Do not fold torchbind objects in constant folding
Any operation on these torchbind objects can have arbitrary side effects, so we can't effectively constant fold anything torchbind-obj-related anyway.
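A minimal sketch of the kind of guard described above (hypothetical helper names; not the actual constant-folding code in this diff):
```python
import torch

def is_torchbind_obj(value):
    # Torchbind custom-class instances surface in Python as torch.ScriptObject.
    return isinstance(value, torch.ScriptObject)

def can_constant_fold(node_inputs):
    # Skip folding if any input is a torchbind object, since calls on such
    # objects may have arbitrary side effects.
    return not any(is_torchbind_obj(v) for v in node_inputs)
```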
Test Plan:
```
buck run fbcode//mode/dev-nosan //caffe2/test/inductor:torchbind -- -r aot_compile_constant_folding
```
Reviewed By: angelayi
Differential Revision: D69946541
cc @desertfire @chenyang78 @penguinwu @benjaminglass1 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,911,832,517
|
[TD] test_cpp_extensions_aot_ninja corresponds to things in test/cpp_extensions
|
clee2000
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Manually map test_cpp_extensions_aot_ninja to files in test/cpp_extensions, since test_cpp_extensions_aot_ninja isn't an actual file you can edit but a wrapper for files in test/cpp_extensions.
Idk if this is a good idea; it feels very manual. Maybe it would be better to classify this the same as any other TD failure, where TD simply can't figure out the tests it needs to run.
| true
|
2,911,827,358
|
Fix score_mod.py dynamic max autotune
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148991
`python benchmarks/transformer/score_mod.py --dynamic --max-autotune`
previously would crash with
```
"/home/bobren/local/a/pytorch/torch/_inductor/select_algorithm.py", line 2306, in key_of
node.get_device().type,
```
but with this change no longer does
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,911,825,074
|
Update VS references in README.md
|
botmethere
|
closed
|
[
"topic: not user facing"
] | 4
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,911,803,153
|
[CI] Update crossvit_9_240 as pass
|
desertfire
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148989
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,911,799,892
|
second derivative of scaled_dot_product_attention does not work for nested tensors
|
mahyarkoy
|
closed
|
[
"module: autograd",
"triaged",
"module: nestedtensor",
"actionable"
] | 2
|
NONE
|
### 🐛 Describe the bug
Trying to compute the gradient of scaled_dot_product_attention on a nested tensor with create_graph=True fails; the code below reproduces the issue:
```python
import torch
import torch.nn.functional as F
t1 = torch.arange(20).float().reshape(5,4)
n1 = torch.nested.as_nested_tensor([t1[:2], t1[2:5]], layout=torch.jagged)
t2 = t1 * 10
n2 = torch.nested.as_nested_tensor([t2[:1], t2[1:5]], layout=torch.jagged)
n1g = n1.clone().detach().requires_grad_()
tensor = F.scaled_dot_product_attention(query=n1g.unsqueeze(2).transpose(1,2), key=n2.unsqueeze(2).transpose(1,2), value=n2.unsqueeze(2).transpose(1,2))
loss = tensor.values().sum()
### RuntimeError: The function '_nested_view_from_jagged' is not differentiable with respect to argument 'min_seqlen'. This input cannot have requires_grad True.
grad = torch.autograd.grad(loss, n1g, create_graph=True)[0]
### Works
grad = torch.autograd.grad(loss, n1g, create_graph=False)[0]
```
### Versions
torch 2.6.0.dev20240925+cu121
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @cpuhrsch @jbschlosser @bhosmer @drisspg @davidberard98 @YuqingJ
| true
|
2,911,645,939
|
test/dynamo/test_utils: Fix one broken test on different python versions
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148987
We correctly handled different Python versions in the explicit ir_nodes test, but
didn't handle them in the dynamo_timed test. Just explicitly delete the fields
there so the dynamo_timed test passes on all Python versions.
(I noticed it breaking on 3.13.)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,911,629,444
|
[AMD] Various fixes for mem efficient attention on CK backend
|
xw285cornell
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
CONTRIBUTOR
|
Summary: Decouple aotriton vs. ck for mem efficient attention. Also fixed HW check.
Reviewed By: henryhu6
Differential Revision: D70872677
| true
|
2,911,603,753
|
security test for reopened PR
|
hashupdatebot
|
closed
|
[
"open source",
"topic: not user facing"
] | 4
|
NONE
|
Fixes #ISSUE_NUMBER
| true
|
2,911,590,841
|
[Sync file] the new file is not sync properly between pytorch/pytorch and pytorch/benchmark
|
yangw-dev
|
open
|
[
"oncall: releng",
"triaged"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The timm_model.yml file created in a pytorch/pytorch PR does not get synced to pytorch/benchmark.
## Issue Details
- PR submitted in pytorch: https://github.com/pytorch/pytorch/commit/e02c038a237483e70fa3541b0ade5d0d1c13165c
- the sync PR (by robot) in pytorch/benchmark (missing the new yaml file):
https://github.com/pytorch/benchmark/commit/d9cc213cbe99dc1ee3f837403afdee31aa378e8b
Code source:
https://github.com/pytorch/benchmark/blob/main/userbenchmark/dynamo/dynamobench/timm_models.py#L223
This makes the ao timm job fail with a file-not-found error:
https://github.com/pytorch/benchmark/actions/runs/13771572330/job/38511203731
Error:
FileNotFoundError: [Errno 2] No such file or directory: '/home/charlie/_work/benchmark/benchmark/benchmark/userbenchmark/dynamo/dynamobench/timm_models.yaml'
| true
|
2,911,536,414
|
inference_mode Tensors do not always need to be guarded on
|
zou3519
|
open
|
[
"triaged",
"vllm-compile",
"dynamo-triage-jan2025"
] | 0
|
CONTRIBUTOR
|
The following triggers a recompile:
```
import torch

with torch.inference_mode():
    x = torch.randn(3)
    y = torch.randn(3)

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    return x.sin()

f(x)
f(y)
```
We saw this in vLLM
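To surface the guard that triggers the recompile, recompile logging can be enabled before running the snippet (a usage sketch; equivalent to setting TORCH_LOGS="recompiles"):
```python
import torch

# Print recompile reasons / failed guards for torch.compile'd functions.
torch._logging.set_logs(recompiles=True)
```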
| true
|
2,911,516,646
|
[ROCm][TunableOp] Unit test for TunableOp BLAS logging.
|
naromero77amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
Add unit test for new TunableOp BLAS logging feature.
Requires this PR to be merged in first: https://github.com/pytorch/pytorch/pull/148979
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,911,499,374
|
[Inductor] Record Triton’s Base32 Cache Key in .best_config for Debugging
|
fulvius31
|
open
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"ci-no-td"
] | 42
|
CONTRIBUTOR
|
This is a follow-up to the reverted PR https://github.com/pytorch/pytorch/pull/147019:
Modified TorchInductor's autotuning flow so that each best_config JSON file also includes the Triton "base32" (or base64) cache key.
Motivation
Debugging & Analysis: With this change, we can quickly identify which compiled binary and IRs belong to a given best config.
The impact is minimal since it is only an extra field in .best_config. It can help advanced performance tuning or kernel-level debugging.
Also, since Triton already stores the cubin/hsaco in its cache, developers/researchers can avoid setting store_cubin = True, since they can get the cubin/hsaco from the Triton cache and, with the code provided in this PR, easily match the best_config with the right Triton cache directory for the "best" kernel.
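Purely for illustration, a best_config entry with the extra field might look like the sketch below (the field name and exact contents are hypothetical; the real ones are defined by this PR):
```python
# Hypothetical illustration of a .best_config payload after this change.
best_config = {
    "XBLOCK": 64,
    "num_warps": 4,
    "num_stages": 2,
    # Hypothetical name for the added field: the Triton base32 cache key that
    # identifies the matching Triton cache directory for this kernel.
    "triton_cache_hash": "c5h2k7w3...",
}
```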
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @davidberard98 @clee2000 @eellison @masnesral
| true
|
2,911,467,641
|
torch.export.export used to work with scan in 1/2025
|
xadupre
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 5
|
COLLABORATOR
|
### 🐛 Describe the bug
The following example used to work in January 2025. Now, it says ``torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'UserFunctionVariable' object has no attribute 'keywords'``.
```python
import scipy.spatial.distance as spd
import torch


class ModuleWithControlFlowLoop(torch.nn.Module):
    def forward(self, x, y):
        dist = torch.empty((x.shape[0], y.shape[0]), dtype=x.dtype)
        for i in range(x.shape[0]):
            sub = y - x[i : i + 1]
            d = torch.sqrt((sub * sub).sum(axis=1))
            dist[i, :] = d
        return dist


model = ModuleWithControlFlowLoop()
x = torch.randn(3, 4)
y = torch.randn(5, 4)
pwd = spd.cdist(x.numpy(), y.numpy())
expected = torch.from_numpy(pwd)
print(f"shape={pwd.shape}, discrepancies={torch.abs(expected - model(x,y)).max()}")

# %%
# :func:`torch.export.export` works because it unrolls the loop.
# It works if the input size never changes.
ep = torch.export.export(model, (x, y))
print(ep.graph)

# %%
# However, with dynamic shapes, that's another story.
x_rows = torch.export.Dim("x_rows")
y_rows = torch.export.Dim("y_rows")
dim = torch.export.Dim("dim")
try:
    ep = torch.export.export(
        model, (x, y), dynamic_shapes={"x": {0: x_rows, 1: dim}, "y": {0: y_rows, 1: dim}}
    )
    print(ep.graph)
except Exception as e:
    print(e)

# %%
# Suggested Patch
# +++++++++++++++
#
# We need to rewrite the module with the function
# :func:`torch.ops.higher_order.scan`.
def dist(y: torch.Tensor, scanned_x: torch.Tensor):
    sub = y - scanned_x.reshape((1, -1))
    sq = sub * sub
    rd = torch.sqrt(sq.sum(axis=1))
    # clone --> UnsupportedAliasMutationException:
    # Combine_fn might be aliasing the input!
    return [y.clone(), rd]


class ModuleWithControlFlowLoopScan(torch.nn.Module):
    def forward(self, x, y):
        carry, out = torch.ops.higher_order.scan(dist, [y], [x], additional_inputs=[])
        return out


model = ModuleWithControlFlowLoopScan()
model_output = model(x, y)
print(f"shape={pwd.shape}, discrepancies={torch.abs(expected - model_output).max()}")

# %%
# That works. Let's export again.
ep = torch.export.export(
    model, (x, y), dynamic_shapes={"x": {0: x_rows, 1: dim}, "y": {0: y_rows, 1: dim}}
)
print(ep.graph)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250311+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] model-explorer-onnx==0.3.4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-array-api==0.3.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-genai-cuda==0.6.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] onnxscript==0.3.0.dev20250301
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250311+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250311+cu126
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.22.0.dev20250311+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,911,464,558
|
[ROCm][TunableOp] Fix TunableOp BLAS logging for online tuning case.
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
In a previous PR https://github.com/pytorch/pytorch/pull/147034, there was a bad merge at the last minute.
BLAS logging works for offline tuning, but does not currently work for online tuning.
This PR fixes BLAS logging for online tuning.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,911,456,802
|
[ez] include config as part of __all__ in torch.compiler
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148991
* __->__ #148978
Right now we are susceptible to a race condition: if torch.compiler.config has not been implicitly imported via dynamo/builder.py, we throw an error when trying to set compiler configs. This fixes it by including config in `__all__`.
Previous
```
>>> import torch
>>> torch.compiler.config.dynamic_sources = "L['kwargs']['float_features']"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'torch.compiler' has no attribute 'config'
```
Now
```
>>> import torch
>>> torch.compiler.config.dynamic_sources = "L['kwargs']['float_features']"
```
| true
|
2,911,373,550
|
module.cuda() doesn't work under FakeTensorMode
|
bdhirsh
|
open
|
[
"module: nn",
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: pt2-dispatcher"
] | 5
|
CONTRIBUTOR
|
repro:
```
import torch
from torch._subclasses import FakeTensorMode
mode = FakeTensorMode()
with mode:
m = torch.nn.Linear(16, 16).cuda()
print(m.weight.device)
```
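An untested sketch of a possible workaround: pass the device at construction time so the parameters are created through factory functions (which FakeTensorMode intercepts) instead of being moved by `Module.cuda()`. Whether this actually sidesteps the failure is an assumption:
```python
import torch
from torch._subclasses import FakeTensorMode

mode = FakeTensorMode()
with mode:
    # device= goes through tensor factory functions rather than .cuda()/.to()
    m = torch.nn.Linear(16, 16, device="cuda")
    print(m.weight.device)
```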
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @chauhang @penguinwu @eellison @zou3519
| true
|
2,911,373,064
|
`dist.barrier()` fails with TORCH_DISTRIBUTED_DEBUG=DETAIL and after dist.send/dist.recv calls
|
slitvinov
|
open
|
[
"oncall: distributed",
"triaged"
] | 3
|
NONE
|
This program
```sh
$ cat bug.py
import torch
import torch.distributed as dist
import torch.distributed.elastic.multiprocessing.errors
@dist.elastic.multiprocessing.errors.record
def main():
dist.init_process_group()
rank = dist.get_rank()
size = dist.get_world_size()
x = torch.tensor(0)
if rank == 0:
x = torch.tensor(123)
dist.send(x, 1)
elif rank == 1:
dist.recv(x, 0)
dist.barrier()
for i in range(size):
if rank == i:
print(f"{rank=} {size=} {x=}")
dist.barrier()
dist.destroy_process_group()
if __name__ == '__main__':
main()
```
Fails with
```
$ OMP_NUM_THREADS=1 TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --nproc-per-node 3 --standalone bug.py
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/lisergey/deepseek/bug.py", line 25, in <module>
[rank0]: main()
[rank0]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
[rank0]: return f(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/lisergey/deepseek/bug.py", line 16, in main
[rank0]: dist.barrier()
[rank0]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
[rank0]: work = group.barrier(opts=opts)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: Detected mismatch between collectives on ranks. Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER), but Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 1vs 0
[rank2]: Traceback (most recent call last):
[rank2]: File "/home/lisergey/deepseek/bug.py", line 25, in <module>
[rank2]: main()
[rank2]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
[rank2]: return f(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^
[rank2]: File "/home/lisergey/deepseek/bug.py", line 16, in main
[rank2]: dist.barrier()
[rank2]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank2]: return func(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
[rank2]: work = group.barrier(opts=opts)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: RuntimeError: Detected mismatch between collectives on ranks. Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 0vs 1
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/lisergey/deepseek/bug.py", line 25, in <module>
[rank1]: main()
[rank1]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
[rank1]: return f(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/lisergey/deepseek/bug.py", line 16, in main
[rank1]: dist.barrier()
[rank1]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
[rank1]: work = group.barrier(opts=opts)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: RuntimeError: Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER), but Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 1vs 0
E0311 18:33:20.716000 340050 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 340054) of binary: /usr/bin/python
E0311 18:33:20.729000 340050 torch/distributed/elastic/multiprocessing/errors/error_handler.py:141] no error file defined for parent, to copy child error file (/tmp/torchelastic_ih8xk_wi/5240416c-e326-45c8-a708-c3ec0cc3d51b_sfj1zute/attempt_0/0/error.json)
Traceback (most recent call last):
File "/home/lisergey/.local/bin/torchrun", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/run.py", line 918, in main
run(args)
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
bug.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2025-03-11_18:33:20
host : lenovo
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 340055)
error_file: /tmp/torchelastic_ih8xk_wi/5240416c-e326-45c8-a708-c3ec0cc3d51b_sfj1zute/attempt_0/1/error.json
traceback : Traceback (most recent call last):
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/lisergey/deepseek/bug.py", line 16, in main
dist.barrier()
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
work = group.barrier(opts=opts)
^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER), but Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 1vs 0
[2]:
time : 2025-03-11_18:33:20
host : lenovo
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 340056)
error_file: /tmp/torchelastic_ih8xk_wi/5240416c-e326-45c8-a708-c3ec0cc3d51b_sfj1zute/attempt_0/2/error.json
traceback : Traceback (most recent call last):
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/lisergey/deepseek/bug.py", line 16, in main
dist.barrier()
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
work = group.barrier(opts=opts)
^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Detected mismatch between collectives on ranks. Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 0vs 1
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-03-11_18:33:20
host : lenovo
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 340054)
error_file: /tmp/torchelastic_ih8xk_wi/5240416c-e326-45c8-a708-c3ec0cc3d51b_sfj1zute/attempt_0/0/error.json
traceback : Traceback (most recent call last):
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/lisergey/deepseek/bug.py", line 16, in main
dist.barrier()
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
work = group.barrier(opts=opts)
^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Detected mismatch between collectives on ranks. Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER), but Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 1vs 0
============================================================
```
It runs as expected with `--nproc-per-node 2`
```
$ OMP_NUM_THREADS=1 TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --nproc-per-node 2 --standalone bug.py
rank=0 size=2 x=tensor(123)
rank=1 size=2 x=tensor(123)
```
and with any `--nproc-per-node` if I don't set `TORCH_DISTRIBUTED_DEBUG=DETAIL`:
```
$ OMP_NUM_THREADS=1 torchrun --nproc-per-node 3 --standalone bug.py
rank=0 size=3 x=tensor(123)
rank=1 size=3 x=tensor(123)
rank=2 size=3 x=tensor(0)
```
It also works even with `TORCH_DISTRIBUTED_DEBUG=DETAIL` but without `dist.send()` and `dist.recv()` calls
```
$ cat bug.py
import torch
import torch.distributed as dist
import torch.distributed.elastic.multiprocessing.errors
@dist.elastic.multiprocessing.errors.record
def main():
dist.init_process_group()
rank = dist.get_rank()
size = dist.get_world_size()
x = torch.tensor(0)
dist.barrier()
for i in range(size):
if rank == i:
print(f"{rank=} {size=} {x=}")
dist.barrier()
dist.destroy_process_group()
if __name__ == '__main__':
main()
$ OMP_NUM_THREADS=1 TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --nproc-per-node 3 --standalone bug.py
rank=0 size=3 x=tensor(0)
rank=1 size=3 x=tensor(0)
rank=2 size=3 x=tensor(0)
```
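For this particular repro, a workaround sketch (not a fix for the underlying sequence-number accounting) is to replace the point-to-point send/recv with a collective that every rank participates in, e.g. a broadcast from rank 0. Note this also changes the semantics, since all ranks then receive the value:
```python
import torch
import torch.distributed as dist

def main():
    dist.init_process_group()
    rank = dist.get_rank()
    size = dist.get_world_size()
    x = torch.tensor(123) if rank == 0 else torch.tensor(0)
    dist.broadcast(x, src=0)  # all ranks take part, keeping sequence numbers aligned
    dist.barrier()
    for i in range(size):
        if rank == i:
            print(f"{rank=} {size=} {x=}")
        dist.barrier()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```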
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-19-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
CPU family: 6
Model: 140
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 43%
CPU max MHz: 4200.0000
CPU min MHz: 400.0000
BogoMIPS: 4838.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 5 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.14.0
[pip3] pytorch-forecasting==1.3.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.6.0
[pip3] torchmetrics==1.6.1
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,911,346,773
|
[MPSInductor] Fix large prod and sum reductions
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #149004
* __->__ #148975
After this change, if the reduction dimension is larger than `max_threadgroup_size`, a `for` loop is emitted from `codegen_iteration_ranges_entry` and wrapped up in `codegen_body()`.
I.e. after this change, the following command
```
% TORCH_LOGS=output_code python -c "import torch;print(torch.compile(lambda x:(x[0::2].sin()+(x[1::2] + .4).cos()).sum(dim=0) - 3.14)(torch.rand(4096, device='mps')))" 2>&1|cut -c 86-
```
will emit the following shader
```metal
#include <c10/metal/random.h>
#include <c10/metal/special_math.h>
#include <c10/metal/utils.h>
#include <c10/metal/reduction_utils.h>
kernel void generated_kernel(
device float* out_ptr1,
constant float* in_ptr0,
uint2 thread_pos [[thread_position_in_grid]],
uint2 group_pos [[thread_position_in_threadgroup]]
) {
auto xindex = thread_pos.x;
auto r0_index = thread_pos.y;
threadgroup float tmp_acc_0[1024];
tmp_acc_0[r0_index] = 0;
for(auto r0_0_cnt = 0; r0_0_cnt < 2; ++r0_0_cnt) {
int r0_0 = 2 * r0_index + r0_0_cnt;
if (r0_0 >= 2047) break;
auto tmp0 = in_ptr0[2*r0_0];
auto tmp2 = in_ptr0[1 + 2*r0_0];
auto tmp1 = metal::precise::sin(tmp0);
auto tmp3 = 0.4;
auto tmp4 = tmp2 + tmp3;
auto tmp5 = metal::precise::cos(tmp4);
auto tmp6 = tmp1 + tmp5;
tmp_acc_0[r0_index] += tmp6;
}
auto tmp7 = c10::metal::threadgroup_sum(tmp_acc_0, 1024);
auto tmp8 = 3.14;
auto tmp9 = tmp7 - tmp8;
out_ptr1[0] = static_cast<float>(tmp9);
}
```
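A quick numerical sanity check against eager on an MPS machine, with the reduction length chosen to exceed the 1024-thread threadgroup used above (a sketch, not part of this PR's test plan):
```python
import torch

x = torch.rand(4096, 8, device="mps")
ref = x.sum(dim=0)
got = torch.compile(lambda t: t.sum(dim=0))(x)
print(torch.allclose(ref, got, atol=1e-4, rtol=1e-4))
```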
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,911,285,563
|
Fix DCP link
|
H-Huang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: docs",
"topic: not user facing"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148974
| true
|
2,911,209,094
|
ConvTranspose1d on MKLDNN with BF32 yields wrong results on Intel Sapphire Rapids CPUs
|
Flamefire
|
closed
|
[
"module: mkldnn",
"module: intel"
] | 6
|
COLLABORATOR
|
### 🐛 Describe the bug
I see test failures in `test_conv_deconv_*d_lower_precision_cpu_bfloat16` on systems with Intel Sapphire Rapids. They fail consistently with the same diff, so they are fully reproducible.
I reduced the test to a minimal example:
```
import copy
import torch
from torch.utils import mkldnn as mkldnn_utils
import torch.testing
dtype=torch.bfloat16
torch.manual_seed(1234)
for _ in range(12):
N = torch.randint(1, 3, (1,)).item()
M = torch.randint(1, 3, (1,)).item()
C = torch.randint(1, 3, (1,)).item()
x_shape = (N, C) + (224,)
x = torch.randn(x_shape, dtype=torch.float32)
conv = torch.nn.ConvTranspose1d(in_channels=C,
out_channels=M,
kernel_size=3,
stride=2,
padding=1,
dilation=2,
bias=True,
groups=1).float()
x_lower = x.to(dtype=dtype)
mkldnn_conv = mkldnn_utils.to_mkldnn(copy.deepcopy(conv))
mkldnn_conv_lower = mkldnn_utils.to_mkldnn(copy.deepcopy(conv), dtype)
y = mkldnn_conv(x.to_mkldnn()).to_dense()
y_lower = mkldnn_conv_lower(x_lower.to_mkldnn()).to_dense(torch.float32)
torch.testing.assert_close(y, y_lower, atol=1e-1, rtol=1e-3)
```
The output is:
```
Traceback (most recent call last):
File "/home/alex/test_mkldnn.py", line 27, in <module>
torch.testing.assert_close(y, y_lower, atol=1e-1, rtol=1e-3)
File "/dev/shm/venv/lib/python3.11/site-packages/torch/testing/_comparison.py", line 1530, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 898 / 1796 (50.0%)
Greatest absolute difference: 0.3786679804325104 at index (1, 0, 267) (up to 0.1 allowed)
Greatest relative difference: inf at index (0, 0, 0) (up to 0.001 allowed)
```
The loop is required because the issue seemingly doesn't affect all inputs, so one has to hit the "right" random input. That happens consistently in the actual test because it runs the code with all combinations of multiple parameters.
This has been happening since PyTorch 2.2; version 2.1 is not affected.
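A diagnostic sketch that may help separate bf16 rounding of the inputs/weights from an actual kernel problem: compute an fp32 reference on values that have already been rounded to bf16 and compare it against the lower-precision MKLDNN result. The shape is fixed here for brevity, so the random loop above may still be needed to hit a failing input:
```python
import copy
import torch
from torch.utils import mkldnn as mkldnn_utils

dtype = torch.bfloat16
torch.manual_seed(1234)
x = torch.randn((2, 2, 224), dtype=torch.float32)
conv = torch.nn.ConvTranspose1d(2, 2, kernel_size=3, stride=2, padding=1,
                                dilation=2, bias=True, groups=1).float()
x_lower = x.to(dtype=dtype)

# fp32 reference on inputs/weights that have already been rounded to bf16
conv_ref = copy.deepcopy(conv)
conv_ref.weight.data = conv_ref.weight.data.to(dtype).float()
conv_ref.bias.data = conv_ref.bias.data.to(dtype).float()
y_ref = conv_ref(x_lower.float())

mkldnn_conv_lower = mkldnn_utils.to_mkldnn(copy.deepcopy(conv), dtype)
y_lower = mkldnn_conv_lower(x_lower.to_mkldnn()).to_dense(torch.float32)
torch.testing.assert_close(y_ref, y_lower, atol=1e-1, rtol=1e-3)
```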
<details>
<summary>Versions</summary>
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.9 (Ootpa) (x86_64)
GCC version: (GCC) 13.2.0
Clang version: Could not collect
CMake version: version 3.27.6
Libc version: glibc-2.28
Python version: 3.11.5 (main, Nov 6 2023, 12:05:40) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8470
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU(s) scaling MHz: 189%
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-12,104-116
NUMA node1 CPU(s): 13-25,117-129
NUMA node2 CPU(s): 26-38,130-142
NUMA node3 CPU(s): 39-51,143-155
NUMA node4 CPU(s): 52-64,156-168
NUMA node5 CPU(s): 65-77,169-181
NUMA node6 CPU(s): 78-90,182-194
NUMA node7 CPU(s): 91-103,195-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.6.0
[pip3] triton==3.2.0
</details>
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @frank-wei
| true
|
2,911,144,418
|
[MPS] Make `torch.mps.compile_shader` public
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
It was a private method in 2.6, but nothing changed in its API for 2.7, and it will likely remain the same in 2.8, so it is time to remove the underscore from its name.
This allows one to author and invoke shaders directly from PyTorch; for example, the code below implements an increment by thread index:
```python
import torch
x = torch.ones(10, device="mps")
m = torch.mps.compile_shader("""
kernel void foo(device float* x, uint idx [[thread_position_in_grid]]) {
    x[idx] += idx;
}
""")
m.foo(x)
```
| true
|
2,911,130,785
|
[release] Move triton pin to latest triton release/3.3.x
|
atalman
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 3
|
CONTRIBUTOR
|
This branch contains the latest AMD cherry-picks:
https://github.com/triton-lang/triton/pull/6171
https://github.com/triton-lang/triton/pull/6165
cc @jeffdaily @jataylo @jithunnair-amd
| true
|
2,911,112,520
|
ONNX export drops namespace qualifier for custom operation
|
borisfom
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged",
"onnx-needs-info"
] | 5
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Here is a repro modified from the example used on the PyTorch doc page for custom ONNX ops.
I expect the saved ONNX file to have a com.microsoft::Gelu node - the ONNXProgram seems to have the qualifier, but it is lost when the file is saved:
```
import torch
import onnxscript
import onnx
class GeluModel(torch.nn.Module):
def forward(self, input_x):
return torch.ops.aten.gelu(input_x)
microsoft_op = onnxscript.values.Opset(domain="com.microsoft", version=1)
from onnxscript import FLOAT
@onnxscript.script(microsoft_op)
def custom_aten_gelu(self: FLOAT, approximate: str = "none") -> FLOAT:
return microsoft_op.Gelu(self)
x = torch.tensor([1.0])
onnx_program = torch.onnx.export(
GeluModel().eval(),
(x,),
dynamo=True,
custom_translation_table={
torch.ops.aten.gelu.default: custom_aten_gelu,
},
)
onnx_program.optimize()
print(onnx_program.model)
onnx_file_path="ms.onnx"
print("==============")
onnx_program.save(onnx_file_path)
onnx_model = onnx.load(onnx_file_path)
print(onnx.helper.printable_graph(onnx_model.graph))
```
The output, note no qualifier in the second printout:
```
python ms.py
'Gelu' is not a known op in 'com.microsoft'
/git/onnxscript/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/git/onnxscript/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
[torch.onnx] Obtain model graph for `GeluModel()` with `torch.export.export(..., strict=False)`...
/usr/local/lib/python3.12/dist-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
[torch.onnx] Obtain model graph for `GeluModel()` with `torch.export.export(..., strict=False)`... ✅
[torch.onnx] Run decomposition...
/usr/local/lib/python3.12/dist-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
[torch.onnx] Run decomposition... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ✅
<
ir_version=10,
opset_imports={'pkg.onnxscript.torch_lib.common': 1, 'com.microsoft': 1, '': 18},
producer_name='pytorch',
producer_version='2.7.0.dev20250310+cu128',
domain=None,
model_version=None,
>
graph(
name=main_graph,
inputs=(
%"input_x"<FLOAT,[1]>
),
outputs=(
%"gelu"<FLOAT,[1]>
),
) {
0 | # n0
%"gelu"<FLOAT,[1]> ⬅️ com.microsoft::Gelu(%"input_x")
return %"gelu"<FLOAT,[1]>
}
==============
graph main_graph (
%input_x[FLOAT, 1]
) {
%gelu = Gelu(%input_x)
return %gelu
}
```
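As a side check (assuming the file above has been written), `onnx.helper.printable_graph` may simply not render domains; inspecting the node protos directly tells apart "the qualifier was dropped on save" from "the printer omits it":
```python
import onnx

m = onnx.load("ms.onnx")
for node in m.graph.node:
    # An empty domain means the default ai.onnx domain; the custom op should
    # carry "com.microsoft" here if the qualifier survived serialization.
    print(repr(node.domain), node.op_type)
print([(op.domain, op.version) for op in m.opset_import])
```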
@justinchuby @xadupre @titaiwangms
### Versions
Pytorch nightly
| true
|
2,911,100,657
|
[MPSInductor] Prep for multistage reductions
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148975
* __->__ #148969
----
- Move reduction variable initialization from `loads` to `indexing_code`
- Move barriers from `codegen_kernel` to `reduction` and only use them for `any` reductions (as other reduction ops do barriers explicitly inside the respective reduction functions)
- Use `self.compute` instead of `self.body` for all compute operations
Checked that number of before/after failures stays at `164 failed, 616 passed, 53 skipped`
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,911,091,183
|
Slow evaluation on Mac with custom-built library
|
matteosal
|
closed
|
[
"module: performance",
"triaged",
"module: macos",
"module: arm"
] | 4
|
NONE
|
I have built libtorch on Mac (Apple Silicon) with these settings
```
`# GENERAL` \
-DCMAKE_INSTALL_PREFIX=$output_dir \
-DCMAKE_BUILD_TYPE=Release \
`# PYTORCH SPECIFIC` \
-DBUILD_PYTHON=OFF \
-DUSE_NUMPY=OFF \
-DUSE_DISTRIBUTED=OFF `# distributed computing tools` \
-DUSE_FBGEMM=OFF `# quantized operators` \
-DATEN_NO_TEST=ON \
-DUSE_CUDA=OFF \
-DUSE_ROCM=OFF `# amd GPU support` \
-DUSE_XPU=OFF `# intel GPU support` \
-DUSE_KINETO=OFF `# profiling tools` \
`# OPENBLAS/OPENMP` \
-DBLAS="OpenBLAS" \
-DOpenBLAS_INCLUDE_DIR=$openblas_dir/include \
-DOpenBLAS_LIB=$openblas_dir/lib/libopenblas.a \
-DOpenMP_C_FLAGS="-I$omp_dir -Xpreprocessor -fopenmp" \
-DOpenMP_CXX_FLAGS="-I$omp_dir -Xpreprocessor -fopenmp" \
-DOpenMP_C_LIB_NAMES="libomp" \
-DOpenMP_CXX_LIB_NAMES="libomp" \
-DOpenMP_libomp_LIBRARY="$omp_dir/libomp.dylib" \
```
using an openblas built with these settings
```
make CORE=ARMv8 BINARY=64 INTERFACE64=0 CFLAGS="-DDTB_DEFAULT_ENTRIES=64 -O3 -mmacosx-version-min=12.0"
```
This is a standalone program that runs a convolution and times it:
```cpp
#include <torch/torch.h>
#include <iostream>
#include <chrono>
using namespace torch;
int main() {
Tensor input = torch::ones({10, 256, 128, 128});
Tensor w = torch::ones({256, 256, 3, 3});
Tensor b = torch::ones({256});
std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
at::convolution(
input,
w,
b,
{1, 1}, // stride
{0, 0}, // padding
{1, 1}, // dilation
false, // transposed
{0, 0}, // out padding
1 // groups
);
std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count() << "\n";
}
```
And I have built it with this CMake script (needs arguments `PyTorch_DIR` and `OMP_DIR`):
```
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(example-app)
list(APPEND CMAKE_PREFIX_PATH ${PyTorch_DIR})
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_target_properties(example-app PROPERTIES
CXX_STANDARD 17
BUILD_RPATH "${OMP_DIR}"
INSTALL_RPATH "${INSTALL_RPATH};${OMP_DIR}"
)
```
Running the executable on an 8-core M1 Mac Mini gives about 1000ms of evaluation time, while on a less powerful Linux laptop (running a libtorch built with MKL) I get about 300ms.
On more complicated training examples I can see all cores spinning, but it's still very slow, actually much worse than the 3x slowdown this case shows. I guess the problem is somewhere in my libtorch build settings, or maybe with linking to OpenBLAS at all? What's the recommended way to build for Apple Silicon?
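One way to tell whether the slowdown comes from the custom libtorch build or from the machine itself is to time the same workload with an official `pip install torch` wheel on the same Mac (a sketch of the comparison, using the Python API rather than the C++ program above):
```python
import time
import torch

x = torch.ones(10, 256, 128, 128)
w = torch.ones(256, 256, 3, 3)
b = torch.ones(256)

# warm-up so one-time allocator/threading setup is not measured
torch.nn.functional.conv2d(x, w, b, stride=1, padding=0, dilation=1)

start = time.perf_counter()
torch.nn.functional.conv2d(x, w, b, stride=1, padding=0, dilation=1)
print(f"{(time.perf_counter() - start) * 1000:.1f} ms")
```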
cc @msaroufim @malfet @albanD @snadampal @milpuz01
| true
|
2,911,063,053
|
[ROCm] Fix TORCH_CHECK for hdim 512 support added in AOTriton 0.9b
|
xinyazhang
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 5
|
COLLABORATOR
|
Fixes #148850
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,911,032,106
|
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_bfloat16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 10
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38551199174).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 227, in test_parity
actual = func(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs_', keys=('aten::_foreach_abs_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1972, in wrap_fn
return fn(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 234, in test_parity
with self.assertRaises(type(e)):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: AssertionError not raised
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.bfloat16]], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_inplace_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,911,031,933
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_bfloat16 (__main__.TestForeachCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | 7
|
NONE
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38542109683).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_bfloat16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
| true
|
2,910,886,541
|
Consistently use `testing.assert_close` in tests
|
Flamefire
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"open source",
"release notes: quantization",
"topic: not user facing",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: distributed (checkpoint)"
] | 2
|
COLLABORATOR
|
The error messages for failing tests are much better.
The replacement was done using search & replace with a regexp, so it should all be fine.
Edit: Looking around, I'd say even `self.assertEqual` would work. I'm not sure when to use which.
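For reference, the kind of message this buys us, which is the motivation for the switch (a small illustration, not taken from the changed tests):
```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([1.0, 2.5, 3.0])
# Raises AssertionError reporting the number of mismatched elements and the
# greatest absolute/relative difference, instead of a bare equality failure.
torch.testing.assert_close(a, b)
```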
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,910,830,810
|
[BE]: Update CU128 cudnn to 9.8.0.87
|
Skylion007
|
closed
|
[
"open source",
"better-engineering",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Also, cu12.6 is on an old cuDNN version; we may want to upgrade it for all the performance reasons, as I don't see a manywheel linux reason to stay back on the old 9.5 release. I might split that into its own PR. This one just updates CU126 to the latest and greatest.
| true
|
2,910,810,940
|
[AOTI][experiment] Turn on freezing as default
|
desertfire
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148962
Summary:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,910,437,241
|
DISABLED test_wrap_kwarg_default_if_branch_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 4
|
NONE
|
Platforms: linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_wrap_kwarg_default_if_branch_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38534750793).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_wrap_kwarg_default_if_branch_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 1683, in test_wrap_kwarg_default_if_branch
self._test_wrap_simple(f, default_args_generator((x, y)), arg_count)
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 7.
Absolute difference: 3
Relative difference: 0.75
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_wrap_kwarg_default_if_branch_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,910,395,496
|
Does torch compile affect results ?
|
christopher5106
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
### 🐛 Describe the bug
Taking the generic example from [Flux Dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
image = pipe(
prompt,
height=1024,
width=1024,
guidance_scale=3.5,
num_inference_steps=50,
max_sequence_length=512,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("img_original.png")
```
gives the following result:

while
```python
pipe.transformer = torch.compile(pipe.transformer)
prompt = "A cat holding a sign that says hello world"
image = pipe(
prompt,
height=1024,
width=1024,
guidance_scale=3.5,
num_inference_steps=50,
max_sequence_length=512,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("img_compiled.png")
```
gives big differences:

Is this expected?
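Some numerical drift between eager and compiled kernels is expected, especially in bfloat16, and over 50 denoising steps it can compound into visible differences. A small sketch to quantify the per-call drift directly instead of comparing images (the model here is a stand-in, not Flux):
```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256), torch.nn.GELU(), torch.nn.Linear(256, 256)
).to(device="cuda", dtype=torch.bfloat16)

x = torch.randn(64, 256, device="cuda", dtype=torch.bfloat16)
with torch.no_grad():
    eager = model(x)
    compiled = torch.compile(model)(x)
print((eager - compiled).abs().max())  # nonzero but small drift is normal
```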
### Versions
pytorch-triton @ https://download.pytorch.org/whl/nightly/pytorch_triton-3.2.0%2Bgit4b3bb1f8-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl#sha256=c2844429f32820c9f7ed167c28ce7326f88e0a32f96f1119f4618dfd650d3993
torch @ https://download.pytorch.org/whl/nightly/cu124/torch-2.7.0.dev20250211%2Bcu124-cp311-cp311-manylinux_2_28_x86_64.whl#sha256=b0ea28d8d73443cce0a6124dce5923a65f3a35ede4c6624fda8fcc12af614aa3
torchaudio @ https://download.pytorch.org/whl/nightly/cu124/torchaudio-2.6.0.dev20250211%2Bcu124-cp311-cp311-linux_x86_64.whl#sha256=eb1293e3462d76cb82ddeca8c1f489aa6fc8131243310c2333b95b63aba24999
torchvision @ https://download.pytorch.org/whl/nightly/cu124/torchvision-0.22.0.dev20250211%2Bcu124-cp311-cp311-linux_x86_64.whl#sha256=052f7215fe0f99e551734f14bb86e7b2e652a9107684f06fe07dd534867d844f
diffusers @ git+https://github.com/huggingface/diffusers.git@37c9697f5bb8c96b155d24d5e7382d5215677a8f
NVIDIA H100 PCIe
Driver Version: 550.127.08
CUDA Version: 12.4
Python 3.11.11
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,910,172,042
|
Move token linter code into tools/linter/adaptors/_linter/
|
rec
|
open
|
[
"open source",
"topic: not user facing",
"suppress-bc-linter"
] | 5
|
COLLABORATOR
|
This is a pure refactoring - no executable code has changed.
It is preparatory to adding considerably more functionality to this small family of "token linters" (as I call them, because they tokenize Python programs and then use that to lint them).
I centralized the code that was previously hidden in the individual linters and broke up `_linter.py`, which had become fairly huge.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152256
* __->__ #148959
* #151906
| true
|
2,910,022,699
|
Fixing the pytorch profiler not working with `with_stack` flag set
|
arjun-choudhry
|
closed
|
[
"open source"
] | 6
|
NONE
|
Adding a call to RecordCCall so that the PyCCall events are inserted into the queue. This ensures that profiling doesn't break with the `with_stack` flag set.
Fixes #136817 , #101632
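A minimal exercise of the `with_stack` path for sanity checking (a sketch, not taken from the linked issues):
```python
import torch
from torch.profiler import ProfilerActivity, profile

def work():
    a = torch.randn(128, 128)
    return (a @ a).relu().sum()

with profile(activities=[ProfilerActivity.CPU], with_stack=True) as prof:
    work()

# group_by_stack_n only produces useful output when stack recording works
print(prof.key_averages(group_by_stack_n=5).table(sort_by="cpu_time_total", row_limit=5))
```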
| true
|
2,909,837,222
|
DISABLED test_graph_partition (__main__.TritonCodeGenTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 6
|
NONE
|
Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_graph_partition&suite=TritonCodeGenTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38540387818).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 5 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_graph_partition`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 14130, in test_graph_partition
).check("recursively_apply_fns = runner.recursively_apply_fns").run(
RuntimeError: Expected to find "(buf0, buf1) = self.partitions[0](partition0_args)" but did not find it
Searched string:
''', device_str='cuda')
def partition_0(args):
arg0_1, arg1_1 = args
args.clear()
assert_size_stride(arg0_1, (2, 2), (2, 1))
assert_size_stride(arg1_1, (2, 2), (2, 1))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((4, 4), (4, 1), torch.float32)
# Topologically Sorted Source Nodes: [z], Original ATen: [aten.mm]
stream0 = get_raw_stream(0)
triton_poi_fused_mm_0.run(arg0_1, buf0, 16, grid=grid(16), stream=stream0)
buf1 = empty_strided_cuda((4, 4), (4, 1), torch.float32)
# Topologically Sorted Source Nodes: [z], Original ATen: [aten.mm]
stream0 = get_raw_stream(0)
triton_poi_fused_mm_0.run(arg1_1, buf1, 16, grid=grid(16), stream=stream0)
buf2 = empty_strided_cuda((4, 4), (4, 1), torch.float32)
# Topologically Sorted Source Nodes: [z], Original ATen: [aten.mm]
extern_kernels.mm(buf0, buf1, out=buf2)
del buf0
del buf1
buf3 = empty_strided_cuda((2, 2), (2, 1), torch.float32)
# Topologically Sorted Source Nodes: [y1], Original ATen: [aten.add]
stream0 = get_raw_stream(0)
triton_poi_fused_add_1.run(arg1_1, buf3, 4, grid=grid(4), stream=stream0)
return (buf2, buf3, )
# kernel path: /tmp/tmpdu44q1au/me/cme444lh26frpd6imtv62lmrr64sunuatoluj3olnlpwupk4osvh.py
# Topologically Sorted Source Nodes: [x1, y1, add_3, add_4, add_5], Original ATen: [aten.add]
# Source node to ATen node mapping:
# add_3 => add_3
# add_4 => add_4
# add_5 => add_5
# x1 => add
# y1 => add_1
# Graph fragment:
# %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg0_1, 1), kwargs = {})
# %add_1 : [num_users=2] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg1_1, 1), kwargs = {})
# %add_3 : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%add, %add_1), kwargs = {})
# %add_4 : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%add_3, %slice_tensor_1), kwargs = {})
# %add_5 : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%add_4, %device_put_1), kwargs = {})
triton_poi_fused_add_3 = async_compile.triton('triton_poi_fused_add_3', '''
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
@triton_heuristics.pointwise(
size_hints={'x': 4},
filename=__file__,
triton_meta={'signature': {'in_out_ptr0': '*fp32', 'in_ptr0': '*fp32', 'in_ptr1': '*fp32', 'in_ptr2': '*fp32', 'xnumel': 'i32'}, 'device': DeviceProperties(type='hip', index=0, multi_processor_count=304, cc='gfx942', major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=64), 'constants': {}, 'configs': [AttrsDescriptor.from_dict({'arg_properties': {'tt.divisibility': (0, 1, 2, 3), 'tt.equal_to': ()}, 'cls': 'AttrsDescriptor'})]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_add_3', 'mutated_arg_names': ['in_out_ptr0'], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 4, 'num_reduction': 0, 'backend_hash': 'E747A03CFF7FBD9CD709318F0BAF8DD721C07083AF832499CE986567BEF5D43D', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': False, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False, 'is_hip': True},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_add_3(in_out_ptr0, in_ptr0, in_ptr1, in_ptr2, xnumel, XBLOCK : tl.constexpr):
xnumel = 4
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = xindex < xnumel
x2 = xindex
x0 = (xindex % 2)
x1 = xindex // 2
tmp0 = tl.load(in_ptr0 + (x2), xmask)
tmp3 = tl.load(in_ptr1 + (x2), xmask)
tmp6 = tl.load(in_ptr2 + (x0 + 4*x1), xmask)
tmp8 = tl.load(in_out_ptr0 + (x2), xmask)
tmp1 = 1.0
tmp2 = tmp0 + tmp1
tmp4 = tmp3 + tmp1
tmp5 = tmp2 + tmp4
tmp7 = tmp5 + tmp6
tmp9 = tmp7 + tmp8
tl.store(in_out_ptr0 + (x2), tmp9, xmask)
''', device_str='cuda')
def partition_1(args):
arg0_1, arg1_1, buf2, buf6 = args
args.clear()
assert_size_stride(arg0_1, (2, 2), (2, 1))
assert_size_stride(arg1_1, (2, 2), (2, 1))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf7 = buf6; del buf6 # reuse
# Topologically Sorted Source Nodes: [x1, y1, add_3, add_4, add_5], Original ATen: [aten.add]
stream0 = get_raw_stream(0)
triton_poi_fused_add_3.run(buf7, arg0_1, arg1_1, buf2, 4, grid=grid(4), stream=stream0)
del arg0_1
del arg1_1
del buf2
return (buf7, )
async_compile.wait(globals())
del async_compile
class Runner:
def __init__(self, partitions):
self.partitions = partitions
def recursively_apply_fns(self, fns):
new_callables = []
for fn, c in zip(fns, self.partitions):
new_callables.append(fn(c))
self.partitions = new_callables
def call(self, args):
arg0_1, arg1_1 = args
args.clear()
assert_size_stride(arg0_1, (2, 2), (2, 1))
assert_size_stride(arg1_1, (2, 2), (2, 1))
partition0_args = [arg0_1, arg1_1]
(buf2, buf3) = self.partitions[0](partition0_args)
del partition0_args
buf4 = empty_strided_cpu((2, 2), (2, 1), torch.float32)
buf4.copy_(buf3, False)
buf5 = buf4; del buf4 # reuse
cpp_fused_add_2(buf5)
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf6 = buf3; del buf3 # reuse
buf6.copy_(buf5, False)
del buf5
partition1_args = [arg0_1, arg1_1, buf2, buf6]
del arg0_1, arg1_1, buf2, buf6
(buf7,) = self.partitions[1](partition1_args)
del partition1_args
return (buf7, )
runner = Runner(partitions=[partition_0, partition_1])
call = runner.call
recursively_apply_fns = runner.recursively_apply_fns
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((2, 2), (2, 1), device='cuda:0', dtype=torch.float32)
arg1_1 = rand_strided((2, 2), (2, 1), device='cuda:0', dtype=torch.float32)
fn = lambda: call([arg0_1, arg1_1])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: (buf0, buf1) = self.partitions[0](partition0_args)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor.py TritonCodeGenTests.test_graph_partition
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,909,507,213
|
Support int step for nonfused optimizer
|
zeshengzong
|
open
|
[
"open source",
"release notes: foreach_frontend"
] | 1
|
CONTRIBUTOR
|
Fixes #142378
| true
|
2,909,506,697
|
[scan] Flattened output of HOP scan
|
bohnstingl
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 4
|
COLLABORATOR
|
This is required because downstream operations expect HOPs to return a flattened list of output elements.
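To illustrate what "flattened" means here, a minimal sketch using `torch.utils._pytree` on a hypothetical nested output (not the actual HOP code):
```python
import torch
import torch.utils._pytree as pytree

# A hypothetical nested output structure, e.g. (carry, {"ys": [y0, y1]}).
nested = (torch.zeros(2), {"ys": [torch.ones(2), torch.full((2,), 2.0)]})

# Downstream code works with a flat list of tensors plus a spec that
# remembers the original structure.
flat, spec = pytree.tree_flatten(nested)
print(len(flat))  # 3 tensors
restored = pytree.tree_unflatten(flat, spec)
```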
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ydwu4
| true
|
2,909,436,016
|
Aborted (core dumped) double free or corruption (out)
|
Cookiee235
|
open
|
[
"triaged",
"module: linear algebra"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
class DeterministicModel(torch.nn.Module):
def __init__(self):
super(DeterministicModel, self).__init__()
self.linear = torch.nn.Linear(10, 10)
self.linear.weight.data.fill_(1.0)
self.linear.bias.data.fill_(0.0)
def forward(self, x):
x = torch.nn.functional.rrelu(x, lower=0.1, upper=0.2, training=False)
x = self.linear(x)
x = torch.linalg.ldl_solve(torch.eye(10), torch.arange(10), x)
x = torch.fft.fftshift(x, dim=0)
return x
inputs = torch.ones(10, 10)
model = DeterministicModel()
res = model(inputs)
```
### **StackTrace**
```
double free or corruption (out)
Aborted (core dumped)
```
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Notaffected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @kshitij12345
| true
|
2,909,381,135
|
[dynamo][invoke_subgraph] Input aliasing and mutation check in Dynamo
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #150082
* #150090
* __->__ #148953
* #150036
* #149667
* #149087
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,909,347,602
|
[Dynamo] index_fill_ raises an AssertionError
|
zhejiangxiaomai
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
### 🐛 Describe the bug
```python
import torch
def index_fill_op(inputs, index):
fwd_result = inputs.index_fill_(0, index, 17)
return fwd_result
inputs = torch.randn([2, 2, 2, 2])
inputs = inputs.contiguous(memory_format=torch.channels_last)
index = torch.tensor([1], dtype=torch.long)
index_fill_compile = torch.compile(index_fill_op)
cpu_results = index_fill_compile(inputs, index)
```
ERROR:
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: n=copy_, n.args[0]=permute, placeholders={arg0_1, arg1_1}, graph=graph():
%arg0_1 : [num_users=2] = placeholder[target=arg0_1]
%arg1_1 : [num_users=1] = placeholder[target=arg1_1]
%scalar_tensor : [num_users=1] = call_function[target=torch.ops.aten.scalar_tensor.default](args = (17,), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%expand : [num_users=1] = call_function[target=torch.ops.aten.expand.default](args = (%scalar_tensor, [1, 2, 2, 2]), kwargs = {})
%index_put : [num_users=1] = call_function[target=torch.ops.aten.index_put.default](args = (%arg0_1, [%arg1_1], %expand), kwargs = {})
%clone : [num_users=1] = call_function[target=torch.ops.aten.clone.default](args = (%index_put,), kwargs = {memory_format: torch.contiguous_format})
%empty : [num_users=1] = call_function[target=torch.ops.aten.empty.memory_format](args = ([2, 2, 2, 2],), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%permute : [num_users=1] = call_function[target=torch.ops.aten.permute.default](args = (%empty, [0, 3, 1, 2]), kwargs = {})
%copy_ : [num_users=1] = call_function[target=torch.ops.aten.copy_.default](args = (%permute, %clone), kwargs = {})
%copy__1 : [num_users=1] = call_function[target=torch.ops.aten.copy_.default](args = (%arg0_1, %copy_), kwargs = {})
return (copy__1,)
Call stack:
Traceback (most recent call last):
File "/home/zhenzhao/qnpu/pt_21_195/src/index_fill_.py", line 12, in <module>
cpu_results = index_fill_compile(inputs, index)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 2340, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 153, in aot_dispatch_base
fw_module, updated_flat_args, maybe_subclass_meta = aot_dispatch_base_graph( # type: ignore[misc]
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 184, in aot_dispatch_base_graph
copy_count = assert_functional_graph(fw_module.graph)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/functional_utils.py", line 461, in assert_functional_graph
n.args[0] in placeholders
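A possible workaround sketch (my assumption, not a fix for the underlying functionalization issue): use the out-of-place `index_fill` so the channels-last input is not mutated in place:
```python
import torch

def index_fill_op(inputs, index):
    # Out-of-place variant; produces the same values but avoids the
    # in-place copy_ path that triggers the assertion.
    return inputs.index_fill(0, index, 17)

inputs = torch.randn([2, 2, 2, 2]).contiguous(memory_format=torch.channels_last)
index = torch.tensor([1], dtype=torch.long)
compiled = torch.compile(index_fill_op)
print(compiled(inputs, index).shape)
```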
### Versions
Collecting environment information...
PyTorch version: 2.6.0+hpu.git603340c
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5 (ssh://git@github.com/habana-internal/tpc_llvm10 6423f90703886aa37631daf63eaf24f24df9ba3d)
CMake version: version 3.28.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3]
[pip3]
[conda] Could not collect
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,909,331,956
|
DISABLED test_wrap_kwarg_default_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_wrap_kwarg_default_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38534736647).
Over the past 3 hours, it has been determined flaky in 17 workflow(s) with 34 failures and 17 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_wrap_kwarg_default_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 1667, in test_wrap_kwarg_default
self._test_wrap_simple(f, default_args_generator((x, y)), arg_count)
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 7.
Absolute difference: 3
Relative difference: 0.75
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_wrap_kwarg_default_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,909,303,839
|
[inductor] [fake tensor] `torch.conj` crashes when added to the original complex tensor
|
shaoyuyoung
|
open
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**symptom**: eager passes the check while inductor throws an error
**device backend**: both CPP and Triton
**repro**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
x = x + torch.conj(x)
return x
model = Model()
x = torch.randn(5, 5, dtype=torch.cfloat)
inputs = [x]
def run_test(model, inputs, backend):
torch.manual_seed(0)
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
output = model(*inputs)
print(f"succeed on {backend}")
except Exception as e:
print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
### Error logs
```
succeed on eager
E0311 14:23:19.998000 2522956 site-packages/torch/_subclasses/fake_tensor.py:2408] [0/0] RuntimeError: torch.Tensor.view is not supported for conjugate view tensors when converting to a different dtype.
RuntimeError: torch.Tensor.view is not supported for conjugate view tensors when converting to a different dtype.
```
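A possible workaround sketch (untested assumption): materialize the conjugate with `resolve_conj()` so Inductor never sees a conjugate-view tensor:
```python
import torch

class Model(torch.nn.Module):
    def forward(self, x):
        # resolve_conj() materializes the conjugated values into a plain
        # tensor instead of a lazy conjugate view.
        return x + torch.conj(x).resolve_conj()

x = torch.randn(5, 5, dtype=torch.cfloat)
out = torch.compile(Model())(x)
print(out.imag.abs().max())  # expected ~0, since x + conj(x) is real-valued
```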
### Versions
nightly 20250225
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,909,191,202
|
`torch.distributions.Categorical(logits=...).sample()` returns -9223372036854775808 on `MPS`. Works correctly on `CPU` backend.
|
nipunbatra
|
closed
|
[
"module: mps"
] | 2
|
NONE
|
### 🐛 Describe the bug
```python
import torch
device = 'cpu'
t = torch.tensor([-0.6194, 0.2150, 0.0741, -0.5155, -0.3574, 0.1880, 0.3493, 0.2933,
0.3222, 0.1351, -0.1676, 0.2195, -0.2661, -0.1681, 0.0102, -0.2942,
0.1377, -0.3102, 0.0231, -0.3813, -0.8353, -0.0413, -0.2836, -0.0108,
-0.6760, -0.0350, -0.6092], device=device)
print(torch.distributions.Categorical(logits=t).sample())
device = 'mps'
t = torch.tensor([-0.6194, 0.2150, 0.0741, -0.5155, -0.3574, 0.1880, 0.3493, 0.2933,
0.3222, 0.1351, -0.1676, 0.2195, -0.2661, -0.1681, 0.0102, -0.2942,
0.1377, -0.3102, 0.0231, -0.3813, -0.8353, -0.0413, -0.2836, -0.0108,
-0.6760, -0.0350, -0.6092], device=device)
print(torch.distributions.Categorical(logits=t).sample())
```
Output
```python
tensor(18)
tensor(-9223372036854775808, device='mps:0')
```
The code works correctly on the CPU backend but not on the MPS backend.
The value `-9223372036854775808` is the minimum value of the `int64` data type (`torch.iinfo(torch.int64).min`).
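A possible workaround sketch until the MPS path is fixed (my assumption, not an official recommendation): sample on CPU and move the result back to MPS:
```python
import torch

def sample_categorical(logits: torch.Tensor) -> torch.Tensor:
    # Sample on CPU, where Categorical behaves correctly, then move the
    # sampled index back to the original device.
    sample = torch.distributions.Categorical(logits=logits.cpu()).sample()
    return sample.to(logits.device)

if torch.backends.mps.is_available():
    t = torch.randn(27, device="mps")
    print(sample_categorical(t))
```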
### Versions
```bash
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.1 (x86_64)
GCC version: Could not collect
Clang version: 13.0.1
CMake version: version 3.23.2
Libc version: N/A
Python version: 3.9.20 | packaged by conda-forge | (main, Sep 30 2024, 17:51:21) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-15.3.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] gpytorch==1.9.1
[pip3] hamiltorch==0.4.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==1.9.3
[pip3] pytorch-metric-learning==1.7.3
[pip3] torch==2.0.0
[pip3] torchaudio==0.12.1
[pip3] torchmetrics==0.8.2
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.15.1
[conda] autopytorch 0.2.1 pypi_0 pypi
[conda] gpytorch 1.9.1 pypi_0 pypi
[conda] hamiltorch 0.4.1 pypi_0 pypi
[conda] mkl 2022.2.1 h44ed08c_16952 conda-forge
[conda] mkl-service 2.4.0 py39h9032bd8_0 conda-forge
[conda] mypy-extensions 0.4.3 pypi_0 pypi
[conda] numpy 1.26.4 py39h28c39a1_0 conda-forge
[conda] pytorch-lightning 1.9.3 pypi_0 pypi
[conda] pytorch-metric-learning 1.7.3 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torchaudio 0.12.1 py39_cpu pytorch
[conda] torchmetrics 0.8.2 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.15.1 pypi_0 pypi
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,909,104,733
|
Enable misc-use-internal-linkage check and apply fixes
|
cyyever
|
closed
|
[
"module: cpu",
"open source",
"better-engineering",
"module: amp (automated mixed precision)",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/xpu"
] | 10
|
COLLABORATOR
|
Enables clang-tidy rule [`misc-use-internal-linkage`](https://clang.llvm.org/extra/clang-tidy/checks/misc/use-internal-linkage.html). This check was introduced in Clang-Tidy 18 and is available following the recent update to Clang-Tidy 19.
The check flags functions and variables that are used only within their translation unit so they can be marked static. As a result, undesired symbols are not leaked into other units, more link-time optimisations become possible, and the resulting binaries may be smaller.
Most of the detected violations were fixed by marking the symbols static. In other cases the symbols were indeed consumed by other files, so their declaring headers were included instead. A few declarations were simply wrong and have been fixed.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @mcarilli @ptrblck @leslie-fang-intel @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,909,094,437
|
WIP heuristic choices part 2
|
exclamaforte
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,909,036,188
|
export deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B failed
|
FlintWangacc
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 4
|
NONE
|
### 🐛 Describe the bug
export deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B failed
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.export import export
# Load the model and tokenizer
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
class Qwen2(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.qwen = model
def forward(self, x):
result = self.qwen(x)
result.past_key_values = ()
return result
# Set the model to evaluation mode
model.eval()
# Create dummy input for the model
dummy_input = tokenizer("This is a test input.", return_tensors="pt")
exported_program: torch.export.ExportedProgram = export (
Qwen2(), (dummy_input,)
)
```
Error message:
```shell
/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/cuda/__init__.py:734: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
Traceback (most recent call last):
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 784, in proxy_args_kwargs
proxy_args = tuple(arg.as_proxy() for arg in args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 784, in <genexpr>
proxy_args = tuple(arg.as_proxy() for arg in args)
^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/base.py", line 344, in as_proxy
raise NotImplementedError(str(self))
NotImplementedError: MutableMappingVariable(BatchEncoding)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hmsjwzb/work/models/QWEN/./qwen_export_onnx_2.py", line 46, in <module>
exported_program: torch.export.ExportedProgram = export (
^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/__init__.py", line 368, in export
return _export(
^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/_trace.py", line 1970, in _export
return _export_for_training(
^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/_trace.py", line 1035, in wrapper
raise e
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/_trace.py", line 1008, in wrapper
ep = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/exported_program.py", line 128, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/_trace.py", line 1834, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/_trace.py", line 1283, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/export/_trace.py", line 662, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1569, in inner
result_traced = opt_f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 443, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 443, in call_function
return tx.inline_user_function_return(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1736, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 378, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 317, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 118, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 903, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3072, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3198, in inline_call_
tracer.run()
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 659, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2341, in CALL
self._call(inst)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2335, in _call
self.call_function(fn, args, kwargs)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 897, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/variables/nn_module.py", line 420, in call_function
*proxy_args_kwargs(args, kwargs),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 791, in proxy_args_kwargs
unimplemented(
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/exc.py", line 316, in unimplemented
raise Unsupported(msg, case_name=case_name) from from_exc
torch._dynamo.exc.Unsupported: call_function args: MutableMappingVariable(BatchEncoding)
from user code:
File "/home/hmsjwzb/work/models/QWEN/./qwen_export_onnx_2.py", line 16, in forward
result = self.qwen(x)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 856, in forward
outputs = self.model(
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 535, in forward
inputs_embeds = self.embed_tokens(input_ids)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
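A possible workaround sketch (an assumption, not verified against this exact model): pass the `input_ids` tensor rather than the `BatchEncoding` mapping, since Dynamo cannot proxy the mapping object:
```python
import torch
from torch.export import export
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
tokenizer = AutoTokenizer.from_pretrained(model_name)

class Qwen2(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.qwen = model

    def forward(self, input_ids):
        # Return only tensors so the output is export-friendly.
        return self.qwen(input_ids=input_ids).logits

dummy = tokenizer("This is a test input.", return_tensors="pt")
# Pass the raw tensor instead of the BatchEncoding mapping.
exported = export(Qwen2(), (dummy["input_ids"],))
```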
### Versions
```shell
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.7 (https://github.com/llvm/llvm-project.git cd708029e0b2869e80abe31ddb175f7c35361f90)
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11+local (heads/3.11-dirty:f0895aa9c1d, Dec 20 2024, 14:17:01) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.14.0
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] magma-cuda121 2.6.1 1 pytorch
```
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @penguinwu
| true
|
2,909,029,007
|
ROCm: Enable tf32 testing on test_nn
|
jagadish-amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/rocm",
"ciflow/rocm-mi300"
] | 6
|
CONTRIBUTOR
|
Add tf32 support for ROCm tests.
Test command: `python test/test_nn.py -v`
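For reference, a minimal sketch of the tf32 switches these tests exercise (my assumption is that ROCm builds honor the same flags as CUDA):
```python
import torch

# TF32 is controlled by per-backend flags; on ROCm builds the same flags
# are assumed to be honored by the HIP math libraries.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

if torch.cuda.is_available():
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    c = a @ b  # may use reduced-precision tf32 matmuls when supported
```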
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,908,993,816
|
DISABLED test_set_nccl_pg_timeout_backend0 (__main__.ProcessGroupNCCLGroupTest)
|
pytorch-bot[bot]
|
closed
|
[
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 2
|
NONE
|
Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_set_nccl_pg_timeout_backend0&suite=ProcessGroupNCCLGroupTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38526659704).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_set_nccl_pg_timeout_backend0`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 902, in _check_return_codes
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected -6 but got 0.
Absolute difference: 6
Relative difference: 1.0
Expect process 1 exit code to match Process 0 exit code of -6, but got 0
```
</details>
Test file path: `distributed/test_c10d_nccl.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000
| true
|
2,908,993,689
|
DISABLED test_layer_norm_bwd_req_grad (__main__.DistMathOpsTest)
|
pytorch-bot[bot]
|
closed
|
[
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 3
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_layer_norm_bwd_req_grad&suite=DistMathOpsTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38525949394).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_layer_norm_bwd_req_grad`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 899, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 0 terminated or timed out after 300.01660895347595 seconds
```
</details>
Test file path: `distributed/tensor/test_math_ops.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000
| true
|
2,908,932,339
|
Remove test decorations on MacOS 12
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
COLLABORATOR
|
macOS 12 may have reached EOL (see https://endoflife.date/macos), so these test decorations can be removed.
| true
|
2,908,907,623
|
Remove outdated skipIfRocmVersionLessThan decorations
|
cyyever
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,908,892,516
|
Remove outdated skipCUDAIfCudnnVersionLessThan decoration
|
cyyever
|
closed
|
[
"triaged",
"open source",
"Merged",
"release notes: nn",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Test conditions for CUDNN 7 and 8 were removed because we have moved to CUDNN 9.
| true
|
2,908,886,311
|
Whether the transposed tensor is contiguous affects the results of the subsequent Linear layer.
|
pikerbright
|
open
|
[
"needs reproduction",
"module: nn",
"triaged",
"module: intel"
] | 4
|
NONE
|
### 🐛 Describe the bug
I found that whether a transposed tensor is contiguous affects the results of the subsequent Linear layer. Is this a bug?
```
import torch
from torch import nn
x = torch.randn(3, 4).transpose(0, 1)  # non-contiguous tensor (after transpose)
linear = nn.Linear(3, 2)
y1 = linear(x)  # non-contiguous input
y2 = linear(x.contiguous())  # contiguous input
print(torch.allclose(y1, y2))  # True
x = torch.randn(2, 226, 1024).transpose(0, 1)  # non-contiguous tensor (after transpose)
linear = nn.Linear(1024, 64)
y1 = linear(x)  # non-contiguous input
y2 = linear(x.contiguous())  # contiguous input
print(torch.allclose(y1, y2))  # False ???
```
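A hedged way to look at this (a sketch, not an official diagnosis): measure the elementwise difference directly and compare at a float32-appropriate tolerance, since contiguous and non-contiguous inputs may be dispatched to different matmul code paths that accumulate rounding error differently.
```py
# Minimal sketch with illustrative tolerances: inspect the actual mismatch
# instead of relying on allclose's strict default atol=1e-8, which can be
# exceeded by ordinary float32 rounding noise.
import torch
from torch import nn

torch.manual_seed(0)
x = torch.randn(2, 226, 1024).transpose(0, 1)  # non-contiguous after transpose
linear = nn.Linear(1024, 64)

y1 = linear(x)               # non-contiguous input
y2 = linear(x.contiguous())  # contiguous input

print((y1 - y2).abs().max())                         # size of the worst elementwise gap
print(torch.allclose(y1, y2, atol=1e-5, rtol=1e-5))  # comparison at a looser tolerance
```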
### Versions
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
Nvidia driver version: 570.86.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 44
On-line CPU(s) list: 0-43
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 22
Socket(s): 1
Stepping: 8
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1 MiB (22 instances)
L1i cache: 704 KiB (22 instances)
L2 cache: 44 MiB (22 instances)
L3 cache: 97.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-43
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytorch-fid==0.3.0
[pip3] torch==2.5.0+cu124
[pip3] torch-fidelity==0.3.0
[pip3] torchao==0.9.0
[pip3] torchaudio==2.5.0+cu124
[pip3] torchelastic==0.2.2
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.0+cu124
[pip3] triton==3.1.0
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-fid 0.3.0 pypi_0 pypi
[conda] torch 2.5.0+cu124 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torchao 0.9.0 pypi_0 pypi
[conda] torchaudio 2.5.0+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchmetrics 1.6.1 pypi_0 pypi
[conda] torchvision 0.20.0+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,908,830,771
|
[triton 3.3] `AOTInductorTestABICompatibleGpu.test_triton_kernel_tma_descriptor_1d_dynamic_False_cuda`
|
davidberard98
|
closed
|
[
"module: crash",
"oncall: pt2",
"module: inductor",
"upstream triton",
"oncall: export",
"module: aotinductor",
"module: user triton"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
1. Update triton to `release/3.3.x` https://github.com/triton-lang/triton/tree/release/3.3.x
2. run `python test/inductor/test_aot_inductor.py -vvv -k test_triton_kernel_tma_descriptor_1d_dynamic_False_cuda`
Possibly an easier repro is
```
TORCHINDUCTOR_CPP_WRAPPER=1 python test/inductor/test_triton_kernels.py -k test_tma_descriptor_1d_dynamic_False_backend_inductor
```
errors:
<details>
```
/home/dberard/local/triton-env2/pytorch/torch/backends/cudnn/__init__.py:108: UserWarning: PyTorch was compiled without cuDNN/MIOpen support. To use cuDNN/MIOpen, rebuild PyTorch making sure the library is visible to the build system.
warnings.warn(
/home/dberard/local/triton-env2/pytorch/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /home/dberard/local/triton-env2/pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
W0310 18:58:17.091000 2102274 torch/_export/__init__.py:67] +============================+
W0310 18:58:17.091000 2102274 torch/_export/__init__.py:68] | !!! WARNING !!! |
W0310 18:58:17.092000 2102274 torch/_export/__init__.py:69] +============================+
W0310 18:58:17.092000 2102274 torch/_export/__init__.py:70] torch._export.aot_compile()/torch._export.aot_load() is being deprecated, please switch to directly calling torch._inductor.aoti_compile_and_package(torch.export.export())/torch._inductor.aoti_load_package() instead.
ETEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure
CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
======================================================================
ERROR: test_triton_kernel_tma_descriptor_1d_dynamic_False_cuda (__main__.AOTInductorTestABICompatibleGpu)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 1221, in not_close_error_metas
pair.compare()
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 700, in compare
self._compare_values(actual, expected)
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 830, in _compare_values
compare_fn(
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 1009, in _compare_regular_values_close
matches = torch.isclose(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/home/dberard/local/triton-env2/pytorch/test/inductor/test_torchinductor.py", line 12836, in new_test
return value(self)
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_internal/common_utils.py", line 552, in instantiated_test
test(self, **param_kwargs)
File "/home/dberard/local/triton-env2/pytorch/test/inductor/test_aot_inductor.py", line 2568, in test_triton_kernel_tma_descriptor_1d
self.check_model(
File "/home/dberard/local/triton-env2/pytorch/test/inductor/test_aot_inductor_utils.py", line 207, in check_model
self.assertEqual(actual, expected, atol=atol, rtol=rtol)
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_internal/common_utils.py", line 4052, in assertEqual
error_metas = not_close_error_metas(
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 1228, in not_close_error_metas
f"Comparing\n\n"
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 367, in __repr__
body = [
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 368, in <listcomp>
f" {name}={value!s},"
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor.py", line 590, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor_str.py", line 710, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor_str.py", line 631, in _str_intern
tensor_str = _tensor_str(self, indent)
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor_str.py", line 363, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor_str.py", line 146, in __init__
tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
To execute this test, run the following from the base repo dir:
python test/inductor/test_aot_inductor.py AOTInductorTestABICompatibleGpu.test_triton_kernel_tma_descriptor_1d_dynamic_False_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1 test in 5.612s
FAILED (errors=1)
inline_call []
unimplemented []
stats [('calls_captured', 2), ('unique_graphs', 1)]
inductor [('extern_calls', 4), ('async_compile_cache_miss', 2), ('benchmarking.InductorBenchmarker.benchmark_gpu', 2), ('pattern_matcher_count', 1), ('pattern_matcher_nodes', 1), ('async_compile_cache_hit', 1)]
graph_break []
aten_mm_info []
```
</details>
errors w/ compute-sanitizer:
https://gist.github.com/davidberard98/ecd9fefff91393b3a3fa0725dea96e22
### Versions
triton: release/3.3.x
pytorch: viable/strict from mar 10
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @bertmaher @int3 @nmacchioni @embg @peterbell10 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi @benjaminglass1 @oulgen
| true
|
2,908,829,486
|
[AOTInductor] Is only one model instance supported when using AOTIModelPackageLoader to load an AOT model?
|
zzq96
|
closed
|
[
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 3
|
NONE
|
When I use an AOTI model in C++, I try to run inference in parallel with multiple threads and multiple streams, like this:
```cpp
torch::inductor::AOTIModelPackageLoader loader("model.pt2");
torch::inductor::AOTIModelContainerRunner* runner = loader.get_runner();
// each of these calls happens in a different thread, with its own stream
for (size_t thread_id = 0; thread_id < threads.size(); ++thread_id) {
  auto outputs = runner->run(inputs, streams[thread_id]);
}
```
But I found that the other threads are blocked while one inference is running.
Then I found in the torch code that only one model instance is initialized when AOTIModelPackageLoader is used:
```cpp
std::string cubin_dir = temp_dir_ + k_separator + model_directory;
runner_ = registered_aoti_runner[device](
so_path, 1, device, cubin_dir, run_single_threaded); // here, only init 1 model
```
Is this intended behavior or a bug? How can I run inference in parallel?
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
| true
|
2,908,792,044
|
Split up cub-RadixSortPairs.cu to parallelize compilation
|
TovlyFB
|
closed
|
[
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: cuda",
"ci-no-td",
"no-runner-experiments"
] | 31
|
CONTRIBUTOR
|
Summary: `cub-RadixSortPairs.cu` has slow compilation times, especially on Windows. These changes split up the file into smaller components to allow each component to compile in parallel. On Windows, I observed a compile time drop from about 20 minutes to 6 minutes.
Differential Revision: D70539649
| true
|
2,908,756,112
|
DISABLED test_side_effect_local_list_append_no_graph_break_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
pytorch-bot[bot]
|
closed
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 5
|
NONE
|
Platforms: linux, rocm, slow, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_side_effect_local_list_append_no_graph_break_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38522534619).
Over the past 3 hours, it has been determined flaky in 10 workflow(s) with 20 failures and 10 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_side_effect_local_list_append_no_graph_break_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,908,756,109
|
DISABLED test_fsdp_tp_integration (__main__.TestTPFSDPIntegration)
|
pytorch-bot[bot]
|
closed
|
[
"oncall: distributed",
"module: flaky-tests",
"skipped"
] | 2
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_fsdp_tp_integration&suite=TestTPFSDPIntegration&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38521925944).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_fsdp_tp_integration`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 899, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 0 terminated or timed out after 305.0242257118225 seconds
```
</details>
Test file path: `distributed/fsdp/test_fsdp_tp_integration.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000
| true
|
2,908,743,468
|
[inductor] Fix create_specialize_impl error in latest Triton
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #148933
```py
$ python test/inductor/test_triton_kernels.py KernelTests.test_triton_kernel_2d_autotune_grad_False_dynamic_True_backend_inductor_grid_type_1
WARNING:torch._dynamo:Encountered an exception in identify_mutated_tensors, assuming every input is mutated
Traceback (most recent call last):
File "/home/jansel/pytorch/torch/_higher_order_ops/triton_kernel_wrap.py", line 715, in identify_mutated_tensors
ttir_module, ordered_tensor_names = generate_ttir(kernel, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_higher_order_ops/triton_kernel_wrap.py", line 289, in generate_ttir
specialization = _get_specialization(ordered_args.values())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jansel/pytorch/torch/_higher_order_ops/triton_kernel_wrap.py", line 262, in _get_specialization
specialize_impl = triton.runtime.jit.create_specialize_impl()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: create_specialize_impl() missing 1 required positional argument: 'specialize_extra'
```
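For reference, a minimal sketch (not this PR's actual fix) of detecting the changed signature at runtime; it assumes a Triton build that exposes `create_specialize_impl`, as in the traceback above, and `specialize_impl_requires_extra` is a hypothetical helper name.
```py
# Hedged sketch: check whether this Triton version's create_specialize_impl
# requires the new 'specialize_extra' argument reported in the traceback.
import inspect

import triton


def specialize_impl_requires_extra() -> bool:
    fn = getattr(triton.runtime.jit, "create_specialize_impl", None)
    if fn is None:
        # helper not present in this Triton build
        return False
    return "specialize_extra" in inspect.signature(fn).parameters


print(specialize_impl_requires_extra())
```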
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|