| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (string, 2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (string, 4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,880,282,003
|
[test][do not merge]Upgrade oneDNN to v3.7(11)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,281,292
|
[test][do not merge]Upgrade oneDNN to v3.7 (10)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,280,519
|
[test][do not merge]Upgrade oneDNN to v3.7 (9)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,279,748
|
[test][do not merge]Upgrade oneDNN to v3.7 (8)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,278,648
|
[test][do not merge] Upgrade oneDNN to v3.7 (7)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,277,849
|
test 0-dim squeeze in basic.TestSqueeze
|
redwrasse
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
CONTRIBUTOR
|
Replace the TODO with a 0-dim squeeze check that verifies the scalar is unchanged in `basic.TestSqueeze`.
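For reference, a minimal Python sketch of the property the C++ test exercises (the actual check lives in ATen's `basic.TestSqueeze`): squeezing a 0-dim (scalar) tensor leaves it unchanged.
```python
import torch

# Squeezing a 0-dim (scalar) tensor should be a no-op.
s = torch.tensor(3.14)
assert s.dim() == 0
assert s.squeeze().shape == torch.Size([])
assert torch.equal(s.squeeze(), s)
```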
| true
|
2,880,268,097
|
Custom ops support arbitrary input types by migrating to python dispatcher
|
yanboliang
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Test case:
```
@torch.library.custom_op("mylib::foo", mutates_args=())
def foo(d: dict, t: torch.Tensor) -> torch.Tensor:
return torch.sin(d["x"] - d["y"] + t)
@foo.register_fake
def _(d: dict, t: torch.Tensor) -> torch.Tensor:
return torch.empty_like(d["x"])
d = {"x": torch.randn(2, 3, requires_grad=True), "y": torch.randn(2, 3, requires_grad=True)}
t = torch.randn(2, 3, requires_grad=True)
@torch.compile(backend="eager", fullgraph=True)
def fn(d, t):
return torch.sin(torch.ops.mylib.foo.default(d, t) + 1.5)
y = fn(d, t)
print(y)
y.sum().backward()
print(d["x"].grad)
print(d["y"].grad)
print(t.grad)
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,880,231,009
|
[Intel GPU] Decouple Intel GPU oneDNN from other backends
|
ZhiweiYan-96
|
closed
|
[
"triaged",
"module: mkldnn",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 6
|
COLLABORATOR
|
# Motivation
Currently, Intel GPU feature development is moving forward rapidly. We (Intel GPU) want independent version control over the oneDNN component so that we can quickly adopt the optimizations and bug fixes provided by the oneDNN team.
This PR does not change the behavior of other backends such as Intel CPU and ARM; they keep using the stable version contained in `third_party/ideep`.
# Detail
At compilation time, we `git clone` oneDNN from `https://github.com/oneapi-src/oneDNN` and check out the tag/commit that the Intel GPU backend prefers. This is implemented with CMake's `ExternalProject_Add` command.
Following is a build log example:
```bash
[11/60] Performing download step (git clone) for 'xpu_mkldnn_proj'
Cloning into 'xpu_mkldnn_proj'...
HEAD is now at 5e92240360 meta: updated citation file
[12/60] Performing update step for 'xpu_mkldnn_proj'
-- Already at requested tag: v3.7
[13/60] No patch step for 'xpu_mkldnn_proj'
```
The log demonstrates that we explicitly download the source files and check out a specific tag. The oneDNN sources are located at `build/xpu_mkldnn_proj-prefix/src/xpu_mkldnn_proj`.
# Runtime verification
Running UT for CPU
```bash
onednn_verbose,v1,info,oneDNN v3.7.0 (commit fc3f17ad469b8a6da7192ae12d32625faa509f1e)
onednn_verbose,v1,info,cpu,runtime:OpenMP,nthr:24
onednn_verbose,v1,info,cpu,isa:Intel AVX-512 with Intel DL Boost
onednn_verbose,v1,info,gpu,runtime:none
onednn_verbose,v1,info,graph,backend,0:dnnl_backend
onednn_verbose,v1,primitive,info,template:operation,engine
```
Running UT for Intel GPU
```bash
onednn_verbose,v1,info,oneDNN v3.7.0 (commit 5e9224036021433d2577548ed0539fe9a53256bc)
onednn_verbose,v1,info,cpu,runtime:threadpool,nthr:24
onednn_verbose,v1,info,cpu,isa:Intel AVX-512 with Intel DL Boost
onednn_verbose,v1,info,gpu,runtime:DPC++
onednn_verbose,v1,info,gpu,engine,sycl gpu device count:2
```
We can see that Intel GPU uses commit `5e922` (tag v3.7), while CPU uses `fc3f17`.
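For anyone repeating this verification on a local build, here is a minimal sketch assuming the standard oneDNN verbose switch (`ONEDNN_VERBOSE=1`), which makes oneDNN print the info lines shown above, including version and commit, once a primitive runs:
```python
import os

# Enable oneDNN verbose output before the first primitive executes.
os.environ["ONEDNN_VERBOSE"] = "1"

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
# A CPU convolution is typically routed through oneDNN, which triggers the
# onednn_verbose header (version, commit, ISA, runtime) on stdout.
F.conv2d(x, w)
```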
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147926
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,880,134,335
|
Fix auto_functionalize x inference_mode
|
zou3519
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: composability",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147925
Fixes #147924
We were using the wrong FunctionalTensorMode to construct
FunctionalTensors. FunctionalTensors modify the FunctionalTensorMode on
construction, so that led to the wrong FunctionalTensorMode being
modified. This PR threads the FunctionalTensorMode through correctly.
Test Plan:
- new test
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,087,911
|
[functionalization] inference_mode_base wrong with auto_functionalization
|
zou3519
|
closed
|
[] | 0
|
CONTRIBUTOR
|
auto_functionalization creates FunctionalTensor whose modes are fresh modes: https://github.com/pytorch/pytorch/blob/f211818bc0d1c8de39c1ef8071c4ff865989e40b/torch/_subclasses/functional_tensor.py#L463-L465
However, constructing a FunctionalTensor mutates the mode object (https://github.com/pytorch/pytorch/blob/f211818bc0d1c8de39c1ef8071c4ff865989e40b/torch/_subclasses/functional_tensor.py#L151-L160). In the case of auto_functionalization it ends up mutating the wrong mode.
The fix is simple: we need to thread the mode through to `do_auto_functionalize`.
| true
|
2,880,084,940
|
[inductor] Add logs for precompile and autotuning
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Differential Revision: D70222645
I want to add more logs around precompile, especially around the reason why it sometimes gets fast-returned. See https://github.com/pytorch/pytorch/pull/147590
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,029,343
|
[cutlass backend] turn autotuning logs off by default + rename log to autotuning log
|
henrylhtsang
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147922
things we did:
* turn off autotuning logs by default
* rename autotuning logs from log to autotuning_log, so people are aware that it is a special artifact log.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,880,022,894
|
Adam doesn't work with nonzero-dim Tensor betas
|
Tony-Y
|
open
|
[
"module: optimizer",
"triaged"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
This bug was pointed out at https://github.com/pytorch/pytorch/issues/145461#issuecomment-2612287681. The PR #145674 fixed the Tensor `lr` issue, but not the Tensor `betas` issue.
### Versions
The same as #145461
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,880,022,885
|
Remove binaries/benchmark_args.h
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
It's not used in OSS.
| true
|
2,880,011,647
|
[inductor][cpu]AOT inductor AMP static shape default wrapper occupied almost 3x disk than before
|
zxd1997066
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Take resnet50 as an example,
the bad commit: 0e1675a89bcc00c3615048947b5ef6c0355765d3
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench resnet50 amp first static default 0 aot_inductor
Testing with aot_inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval resnet50
running benchmark: 100%|███████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 12.88it/s]
4.801x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,resnet50,32,4.800948,13.025713,28.911735,0.962134,239.280538,248.697651,0,0,0,0,0,0,0
/workspace/pytorch# cd /tmp/
/tmp# du -d 1 -h
294M ./torchinductor_root
295M .
```
the last good commit: 768d73f6929be2a6eb81fe7424416dceb4a4aca9
```
bash inductor_single_run.sh multiple inference performance torchbench resnet50 amp first static default 0 aot_inductor
Testing with aot_inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval resnet50
running benchmark: 100%|███████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 12.98it/s]
4.797x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,resnet50,32,4.797237,12.917480,27.858889,0.962642,239.691776,248.993792,0,0,0,0,0,0,0
/workspace/pytorch# cd /tmp
/tmp# du -d 1 -h
100M ./torchinductor_root
100M .
```
### Versions
SW info:
| name | target_branch | target_commit | refer_branch | refer_commit |
|---|---|---|---|---|
| torchbench | main | 373ffb19 | main | 766a5e3a |
| torch | main | bea72180ed75f522ce4fe5e723bc2112e0874732 | main | f2d6cfa6775 |
| torchvision | main | 0.19.0a0+d23a6e1 | main | 0.19.0a0+d23a6e1 |
| torchtext | main | 0.16.0a0+b0ebddc | main | 0.16.0a0+b0ebddc |
| torchaudio | main | 2.6.0a0+c670ad8 | main | 2.6.0a0+b6d4675 |
| torchdata | main | 0.7.0a0+11bb5b8 | main | 0.7.0a0+11bb5b8 |
| dynamo_benchmarks | main | nightly | main | f2d6cfa6775601df5a038f7a4d0b37da75a53ed9 |
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench resnet50 amp first static default 0 aot_inductor
Suspected guilty commit: 0e1675a89bcc00c3615048947b5ef6c0355765d3
cc @chauhang @penguinwu @chuanqi129 @leslie-fang-intel @chunyuan-w
| true
|
2,879,999,477
|
[FlexAttention] Fix IMA bug
|
drisspg
|
closed
|
[
"high priority",
"module: nn",
"Merged",
"ciflow/trunk",
"release notes: nn",
"bug",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147918
# Summary
Fixes: https://github.com/pytorch/pytorch/issues/147268
I got this right for the backward but somehow forgot to do the flip in the forward; not sure how this wasn't found earlier.
Testing IMAs is tough in pytest, so I didn't add a test, but I verified on the reproducer:
```py
❯ sanitize python flex/maurice_ima.py --setting 0
========= COMPUTE-SANITIZER
pool: torch.Size([64, 8, 784, 64]) tensor(1.0078, device='cuda:0')
Feat shape torch.Size([64, 8, 784, 64])
Feat strides (401408, 50176, 64, 1)
Feat is contig: True
attn: torch.Size([64, 8, 784, 64]) tensor(1.7994, device='cuda:0')
========= ERROR SUMMARY: 0 errors
❯ sanitize python flex/maurice_ima.py --setting 1
========= COMPUTE-SANITIZER
pool: torch.Size([64, 8, 784, 64]) tensor(2.8297, device='cuda:0')
Feat shape torch.Size([64, 8, 784, 64])
Feat strides (401408, 50176, 64, 1)
Feat is contig: True
attn: torch.Size([64, 8, 784, 64]) tensor(1.9714, device='cuda:0')
========= ERROR SUMMARY: 0 errors
❯ sanitize python flex/maurice_ima.py --setting 2
========= COMPUTE-SANITIZER
pool: torch.Size([64, 8, 784, 64]) tensor(3.2232, device='cuda:0')
Feat shape torch.Size([64, 8, 784, 64])
Feat strides (401408, 50176, 64, 1)
Feat is contig: True
attn: torch.Size([64, 8, 784, 64]) tensor(2.2095, device='cuda:0')
========= ERROR SUMMARY: 0 errors
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,879,961,700
|
[Don't merge]Upgrade submodule oneDNN to v3.7 (#147498)(Zi)
|
xuhancn
|
open
|
[
"module: mkldnn",
"open source",
"Stale",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td",
"ciflow/linux-aarch64"
] | 5
|
COLLABORATOR
|
This PR is to upgrade submodule oneDNN to v3.7.
## Improvements
- Improved performance of convolution and matmul primitives on Intel Xeon processors with Intel AMX instruction set support (formerly Sapphire Rapids and Granite Rapids).
- Improved performance of int8 and fp32 forward convolution primitive on processors with Intel AVX2 instruction set support.
- Improved performance of fp8 matmul primitives with bf16 and fp16 bias data type on Intel Xeon processors with Intel AMX instruction set support (formerly Sapphire Rapids and Granite Rapids).
- Introduced initial optimizations for Intel GPUs based on Xe3 architecture.
- Added bfloat16 support for SDPA, implemented fp16 and bf16 gemm kernel in SDPA.
- Fixed several issues: f16 matmul accuracy, SDPA not being dispatched to the ukernel, bf16/fp16/fp32 conv performance, an INT8 kernel page fault, a deconvolution precision issue on complex128 and fp64, and a gemm correctness issue in float16.
- Improved bf16 matmul performance with fp32 destination with Arm Compute Library (ACL).
- Improved bf16 to fp32 reorder performance.
- Improved bf16 reorder performance.
- Improved bf16 convolution with ACL.
Fixes https://github.com/pytorch/pytorch/issues/136348.
## Validation results on CPU
1. NLP models accuracy/inference/training (result images not included)
2. Torchbench cpu userbenchmark inference & training (result images not included)
3. Inductor quantization (result images not included)
4. Dynamo benchmarks (result images not included)
## Validation results on XPU
Accuracy is the same as baseline. Performance is shown below. (result image not included)
## Validation results on ARM
(result images not included)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/147498
Approved by: https://github.com/fadara01, https://github.com/mingfeima, https://github.com/atalman
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,879,934,245
|
[Draft] Enable cpu_offload for _distribute_state_dict
|
mori360
|
open
|
[
"oncall: distributed",
"Stale",
"release notes: distributed (checkpoint)"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,879,930,417
|
[aot] reset aot counter on torch._dynamo.reset
|
xmfan
|
open
|
[
"Stale",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 2
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147915
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,916,580
|
[MPS] Introduce a shader for `entr()`.
|
dcci
|
closed
|
[
"Merged",
"topic: not user facing",
"module: mps",
"ciflow/mps",
"module: inductor"
] | 4
|
MEMBER
|
To be used in eager/inductor in order to implement the missing operation.
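For context, `torch.special.entr(x)` computes `-x * log(x)` for `x > 0`, `0` at `x == 0`, and `-inf` for `x < 0`. A quick CPU sketch of that definition (the shader itself is Metal code and not shown here):
```python
import torch

x = torch.tensor([-1.0, 0.0, 0.5, 1.0])
# Reference definition of entr, branch by branch.
ref = torch.where(
    x > 0,
    -x * torch.log(x),
    torch.where(x == 0, torch.zeros_like(x), torch.full_like(x, float("-inf"))),
)
print(torch.allclose(torch.special.entr(x), ref))  # True
```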
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,879,899,362
|
[dynamo] Replace `unimplemented` with `unimplemented_v2`
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: compile ux"
] | 0
|
MEMBER
|
Tracking issue to convert all `unimplemented` calls to `unimplemented_v2`.
List of files that need conversion (tag yourself/comment to claim):
- [x] torch/_dynamo/codegen.py @zeshengzong
- [x] torch/_dynamo/variables/base.py @shink
- [x] torch/_dynamo/variables/builder.py @williamwen42 https://github.com/pytorch/pytorch/pull/151044
- [x] torch/_dynamo/variables/builtin.py @williamwen42 https://github.com/pytorch/pytorch/pull/151145
- [x] torch/_dynamo/variables/constant.py @FFFrog
- [x] torch/_dynamo/variables/ctx_manager.py @zou3519
- [ ] torch/_dynamo/variables/dicts.py @anijain2305
- [x] torch/_dynamo/variables/distributed.py @yanboliang #148500
- [x] torch/_dynamo/variables/functions.py (@StrongerXi) https://github.com/pytorch/pytorch/pull/151277
- [ ] torch/_dynamo/variables/higher_order_ops.py @zou3519
- [ ] torch/_dynamo/variables/iter.py @shink https://github.com/pytorch/pytorch/pull/151789
- [x] torch/_dynamo/variables/lists.py @shink https://github.com/pytorch/pytorch/pull/151873
- [ ] torch/_dynamo/variables/misc.py @shink https://github.com/pytorch/pytorch/pull/152274
- [x] torch/_dynamo/variables/nn_module.py @shink https://github.com/pytorch/pytorch/pull/151895
- [ ] torch/_dynamo/variables/script_object.py @zou3519
- [ ] torch/_dynamo/variables/tensor.py
- [x] torch/_dynamo/variables/torch_function.py (@StrongerXi) https://github.com/pytorch/pytorch/pull/151278
- [ ] torch/_dynamo/variables/torch.py
- [ ] torch/_dynamo/variables/user_defined.py (@anijain2305 )
No need to add unittests to test/dynamo/test_graph_break_messages.py unless you think a graph break is significant.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,879,881,072
|
[dynamo] update data-dependent branching graph break messages
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compile ux"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147912
* #147872
* #147494
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,865,679
|
DISABLED test_inductor_reduce_scatter_tensor_single (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d",
"oncall: pt2"
] | 16
|
NONE
|
Platforms: inductor, rocm, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_reduce_scatter_tensor_single&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37814537658).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_reduce_scatter_tensor_single`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/distributed/test_c10d_functional_native.py", line 706, in setUp
dist.init_process_group(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 95, in wrapper
func_return = func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1638, in init_process_group
raise ValueError("trying to initialize the default process group twice!")
ValueError: trying to initialize the default process group twice!
```
</details>
Test file path: `distributed/test_c10d_functional_native.py`
cc @clee2000 @wdvr @chauhang @penguinwu
| true
|
2,879,860,303
|
[DONOTLAND] Fix partial + scalar issue
|
wz337
|
open
|
[
"oncall: distributed",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"module: dtensor"
] | 3
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,879,859,281
|
Exporting onnx model to a buffer causes "TypeError: expected str, bytes or os.PathLike object, not BytesIO"
|
liqunfu
|
closed
|
[
"module: onnx",
"triaged"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
torch.onnx.export cannot take an IO buffer as input when external_data is True. The code below, with some modifications, is from https://github.com/Project-MONAI/MONAI/blob/a09c1f08461cec3d2131fde3939ef38c3c4ad5fc/monai/networks/utils.py#L692.
When running this code:
```python
f = io.BytesIO()
torch.onnx.export(
mode_to_export,
onnx_inputs,
f=f,
input_names=input_names,
output_names=output_names or None,
dynamic_axes=dynamic_axes,
opset_version=opset_version,
do_constant_folding=do_constant_folding,
# dynamo=False,
dynamo=True,
external_data=True,
**torch_versioned_kwargs,
)
```
it got:
```
======================================================================
ERROR: test_unet_1_cpu (__main__.TestConvertToOnnx)
----------------------------------------------------------------------
Traceback (most recent call last):
File "c:\Anaconda3\envs\monai\lib\site-packages\parameterized\parameterized.py", line 620, in standalone_func
return func(*(a + p.args), **p.kwargs, **kw)
File "C:/LiqunWA/MONAI/tests/networks/test_convert_to_onnx.py", line 55, in test_unet
onnx_model = convert_to_onnx(
File "c:\liqunwa\monai\monai\networks\utils.py", line 694, in convert_to_onnx
torch.onnx.export(
File "c:\Anaconda3\envs\monai\lib\site-packages\torch\onnx\__init__.py", line 364, in export
return _compat.export_compat(
File "c:\Anaconda3\envs\monai\lib\site-packages\torch\onnx\_internal\exporter\_compat.py", line 186, in export_compat
onnx_program.save(
File "c:\Anaconda3\envs\monai\lib\site-packages\torch\onnx\_internal\exporter\_onnx_program.py", line 182, in save
onnxscript_apis.save_model_with_external_data(self.model, destination)
File "c:\Anaconda3\envs\monai\lib\site-packages\onnxscript\_framework_apis\torch_2_5.py", line 76, in save_model_with_external_data
destination_path = pathlib.Path(model_path)
File "c:\Anaconda3\envs\monai\lib\pathlib.py", line 960, in __new__
self = cls._from_parts(args)
File "c:\Anaconda3\envs\monai\lib\pathlib.py", line 594, in _from_parts
drv, root, parts = self._parse_args(args)
File "c:\Anaconda3\envs\monai\lib\pathlib.py", line 578, in _parse_args
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not BytesIO
----------------------------------------------------------------------
Ran 5 tests in 74.567s
FAILED (errors=3)
```
### Versions
(monai) c:\LiqunWA\MONAI>python collect_env.py
Collecting environment information...
PyTorch version: 2.7.0.dev20250224+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise (10.0.22631 64-bit)
GCC version: Could not collect
Clang version: 18.1.8
CMake version: version 3.28.3
Libc version: N/A
Python version: 3.10.16 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:19:12) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD EPYC 7763 64-Core Processor
Manufacturer: AuthenticAMD
Family: 2
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2445
MaxClockSpeed: 2445
L2CacheSize: 4096
L2CacheSpeed: None
Revision: 257
Versions of relevant libraries:
[pip3] flake8==7.1.2
[pip3] flake8-bugbear==24.2.6
[pip3] flake8-comprehensions==3.16.0
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnx_graphsurgeon==0.5.5
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.2.0
[pip3] pytorch-ignite==0.4.11
[pip3] torch==2.7.0.dev20250224+cpu
[pip3] torchaudio==2.6.0.dev20250224+cpu
[pip3] torchio==0.20.4
[pip3] torchvision==0.22.0.dev20250224+cpu
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-ignite 0.4.11 pypi_0 pypi
[conda] torch 2.7.0.dev20250224+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250224+cpu pypi_0 pypi
[conda] torchio 0.20.4 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250224+cpu pypi_0 pypi
| true
|
2,879,832,853
|
[PT2][Optimus][Opportunity Finder][1/n] Add opportunity finder in the inductor for GEMM horizonal fusion search
|
mengluy0125
|
open
|
[
"fb-exported",
"Stale",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 6
|
CONTRIBUTOR
|
Summary: As titled
Test Plan:
### How to enable
Patch the following config
```
torch._inductor.config.optimus_opportunity_finder = True
```
### local reproduce
```
buck2 run mode/opt aps_models/ads/ecosystem/tooling/tools/efficient_module_suite/benchmark:omnifm_perf_benchmark -- benchmark-with-prod-model --prod_config mast_omnifm_v1-5_mwb --prod_config_override prod_config_override_jointarch --batch_size 8 --enable_pt2 True
```
| Metric | Value |
|:-------------------|:------------|
| Batch size | 8 |
| GPU type | H100 |
| Latency | 156.54 ms |
| Model size | 15999.01 MB |
| Flops | 672.93 G |
| Flops/example | 84.12 G |
| TFLOPS/sec | 4.30 |
| MFU | 0.54% |
| Activation/example | 2096.66 MB |
| CPU time total | 364.53 ms |
| GPU time total | 150.01 ms |
Trace link: https://our.intern.facebook.com/intern/perfdoctor/trace_view?filepath=tree/traces/efficient_module_suite/omnifm.Feb_25_14_08_21_trace.json.gz&bucket=pyper_traces
snapshot link: https://www.internalfb.com/manifold/explorer/ai_efficiency/tree/gpu_snapshot/omnifm.Feb_25_14_08_21.snapshot.pickle
P1740638925
Differential Revision: D70205693
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,879,823,914
|
Increase reference count of state tensor in `THPGenerator_reduce` to avoid premature garbage collection in `multiprocessing` start method `"forkserver"` and `"spawn"`
|
ringohoffman
|
open
|
[
"triaged",
"module: random",
"open source",
"release notes: cpp"
] | 5
|
CONTRIBUTOR
|
Fixes #146828
For this script:
```python
from __future__ import annotations
import time
import torch
def worker(generator: torch.Generator):
print(generator.get_state())
if __name__ == '__main__':
torch.multiprocessing.set_start_method("forkserver") # or "spawn"
generator = torch.Generator("cpu")
process = torch.multiprocessing.Process(target=worker, args=(generator,))
process.start() # process.run() does not cause a crash
for i in range(10):
print("Main", i)
time.sleep(1)
process.join()
process.close()
```
When I add ~~`Py_INCREF(ret)`~~ `Py_INCREF(state_tensor)`, I stop getting:
```console
$ python a.py
Main 0
Main 1
Traceback (most recent call last):
File "/home/matthew/.conda/envs/torch39/lib/python3.9/multiprocessing/forkserver.py", line 274, in main
code = _serve_one(child_r, fds,
File "/home/matthew/.conda/envs/torch39/lib/python3.9/multiprocessing/forkserver.py", line 313, in _serve_one
code = spawn._main(child_r, parent_sentinel)
File "/home/matthew/.conda/envs/torch39/lib/python3.9/multiprocessing/spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "/home/matthew/pytorch/torch/multiprocessing/reductions.py", line 546, in rebuild_storage_fd
storage = cls._new_shared_fd_cpu(fd, size)
RuntimeError: unable to resize file <filename not specified> to the right size: Invalid argument (22)
Main 2
Main 3
Main 4
Main 5
Main 6
Main 7
Main 8
Main 9
```
And I start getting:
```console
$ python a.py
Main 0
Main 1
tensor([ 1, 209, 156, ..., 0, 0, 0], dtype=torch.uint8)
Main 2
Main 3
Main 4
Main 5
Main 6
Main 7
Main 8
Main 9
```
cc @pbelevich
| true
|
2,879,805,468
|
scriptfunction: Make sure we have valid __name__ and __qualname__
|
c00w
|
closed
|
[
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: jit",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147906
* #147894
It's not fully clear why these are not being created, but you can definitely reproduce this in code. `__name__` is fun, since there appears to be no way to explicitly set it at the pybind11 or C++ layer. I've set this in the Python wrapper code (which works correctly), but let me know if people feel strongly and want us to explicitly cast to Python within the C++ functions and set it there.
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,879,797,609
|
[BE][EZ] Delete MacOS-12.3 xfail list
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147893
* __->__ #147905
* #147892
As PyTorch requires at least MacOS-13 (and Metal-3) to work, delete any pre-MacOS-13 checks from the test script.
| true
|
2,879,787,709
|
[ROCm] Enable mi300-specific workflows to be triggered on PRs
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm",
"ciflow/inductor-rocm",
"ciflow/rocm-mi300",
"ciflow/inductor-perf-test-nightly-rocm"
] | 8
|
COLLABORATOR
|
This change will be needed to be able to trigger the MI300-specific CI workflows on PRs by using a PR label.
* inductor-rocm-mi300.yml uses the existing `ciflow/inductor-rocm` label so that any PR manually labeled as such will trigger `inductor` config runs on both MI200 and MI300.
* rocm-mi300.yml uses a separate `ciflow/rocm-mi300` label, since we don't want to over-trigger `default` config runs on MI300 runners due to limited capacity, and [`ciflow/rocm` label is automatically applied](https://github.com/pytorch/test-infra/blob/79438512a0632583899938d3b0277da78f5569e0/torchci/lib/bot/autoLabelBot.ts#L24) on many PRs.
* inductor-perf-test-nightly-rocm.yml uses a separate `ciflow/inductor-perf-test-nightly-rocm` label, so that we can manually trigger a round of perf testing on MI300 runners to test the perf impact of a major inductor-related change.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,879,782,338
|
Remerge of #144974
|
wdvr
|
open
|
[
"Stale",
"release notes: cuda",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Had to be reverted due to an older PR that needed to be backed out.
This is the re-merge PR for #144974; the original branch was deleted, so the PR needed to be recreated.
@lw feel free to approve / merge this one, or fix up your original one if you can restore gh/lw/5/base
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,768,179
|
[CP] Use TorchFunctionMode to dispatch SDPA for CP
|
fegin
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147902
While we prefer not to use monkey patching to dispatch SDPA, TorchFunctionMode is currently not compatible with selective activation checkpointing (https://github.com/pytorch/pytorch/issues/147995). This PR adds `TorchFunctionMode` dispatch to the CP code and makes it configurable.
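For readers unfamiliar with the mechanism, here is a minimal sketch of the dispatch idea (an illustration only, not the actual CP implementation): a `TorchFunctionMode` intercepts SDPA calls and reroutes them to a replacement implementation, which is what this PR wires into the CP code behind a config switch.
```python
import torch
import torch.nn.functional as F
from torch.overrides import TorchFunctionMode

class SDPADispatchMode(TorchFunctionMode):
    """Reroute scaled_dot_product_attention calls to a user-supplied implementation."""

    def __init__(self, sdpa_impl):
        super().__init__()
        self.sdpa_impl = sdpa_impl  # e.g. a context-parallel SDPA

    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is F.scaled_dot_product_attention:
            return self.sdpa_impl(*args, **kwargs)
        return func(*args, **kwargs)

def my_sdpa(q, k, v, *args, **kwargs):
    # Stand-in for a context-parallel SDPA; here it just calls the regular op.
    return F.scaled_dot_product_attention(q, k, v, *args, **kwargs)

q = k = v = torch.randn(1, 2, 16, 8)
with SDPADispatchMode(my_sdpa):
    out = F.scaled_dot_product_attention(q, k, v)
```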
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,879,750,862
|
[cutlass backend] force_disable_caches for test_number_mm_precompiles
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: Test is flaky right now.
Differential Revision: D70209511
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,879,742,513
|
[ROCm][TunableOp] Remove extra transpose characters in hipBLASLt signature.
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
Cleanup the TunableOp hipBLASLt signature of extra transpose characters.
Test manually and no new regressions found.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,879,737,011
|
Change persistent reduction threshold to 32
|
PaulZhang12
|
open
|
[
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Summary:
Increasing the threshold for the inductor multi-kernel flag from 16 to 32 can lead to significant performance gains. This change is safe, as TORCHINDUCTOR_MULTI_KERNEL is disabled by default.
Example benchmark:
````
import torch
import torch.nn.functional as F
from triton.testing import do_bench
from torch._inductor import config as inductor_config
import math
def position_bias_softmax(scores, weight=None, pw_bias=False):
scores = scores.to(torch.float32)
context_position = torch.arange(2048, dtype=torch.long, device="cuda")[:, None]
memory_position = torch.arange(2048, dtype=torch.long, device="cuda")[None, :]
relative_position = memory_position - context_position # shape (query_length, key_length)
relative_buckets = 0
num_buckets=32
max_distance=128
relative_position = -torch.min(relative_position, torch.zeros_like(relative_position))
max_exact = num_buckets // 2
is_small = relative_position < max_exact
relative_position_if_large = max_exact + (
torch.log(relative_position.float() / max_exact)
/ math.log(max_distance / max_exact)
* (num_buckets - max_exact)
).to(torch.long)
relative_position_if_large = torch.min(
relative_position_if_large, torch.full_like(relative_position_if_large, num_buckets - 1)
)
relative_buckets += torch.where(is_small, relative_position, relative_position_if_large)
values = F.embedding(relative_buckets, weight)
values = values.permute([2, 0, 1]).unsqueeze(0)
scores = scores + values
return F.softmax(scores, dim=-1).to(torch.float16)
scores = torch.randn(8, 2048, 2048, device="cuda", dtype=torch.float16)
weight = torch.randn(32, 1, device="cuda")
position_bias_softmax(scores, weight)
compiled = torch.compile(position_bias_softmax)
compiled(scores, weight=weight)
gb = 2 * scores.element_size() * scores.numel() / 1e9
sec = do_bench(lambda: compiled(scores, weight=weight)) / 1e3
print(f"weighted bias gb/s: {gb/sec}")
````
With this change: gb/s: 987.0799446648006
Baseline: gb/s: 693.3391918370983
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @shunting314 @eellison
| true
|
2,879,729,786
|
[PT2] Allow tensor type in allowed_getattr_types_for_subgm when verifiying ep
|
adeaa0332
|
open
|
[
"fb-exported",
"release notes: export"
] | 10
|
NONE
|
Summary:
Noticed this when converting a graph with the following format:
```
EP(
  non_lowerable_part: (....)
  AOTI_HOP(non_lowerable_inputs)
)
```
You will get the following error
```
raise SpecViolationError(
torch._export.verifier.SpecViolationError: Invalid get_attr type <class 'torch.Tensor'>.
Valid get_attr types: (<class 'torch.fx.graph_module.GraphModule'>, <class 'torch.nn.parameter.Parameter'>)
```
The non-lowerable part has a tensor type in the sub-GraphModule for `get_attr`.
Test Plan: Sandcastle
Differential Revision: D70206758
| true
|
2,879,717,230
|
[targets2buck] Remove tombstone messages proactively
|
bigfootjon
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
MEMBER
|
Summary:
X-link: https://github.com/pytorch/executorch/pull/8703
Originally we created a bunch of empty `TARGETS` files to allow us to enable `BUCK` files in fbcode by hiding the existing BUCK file. These files were subsequently merged together using `non_fbcode_target` so these tombstones are no longer necessary.
This diff fixes all files that WOULD have had the useless tombstone merged into them. To create this diff, I just ran the merger script that Codemod Service is using, then deleted the "merged from" and tombstone lines with `sed` and `arc f`, and reverted any lines that didn't make sense.
Test Plan: CI
Differential Revision: D69994481
| true
|
2,879,712,983
|
[ONNX] slice complex tensor needs implementation
|
MilesV64
|
closed
|
[
"module: onnx",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
Torch 2.6.0 shows an error with slice calls to complex tensors.
```
<class 'torch.onnx._internal.exporter._errors.DispatchError'>: No ONNX function found for <OpOverload(op='aten.slice', overload='Tensor')>. Failure message: No decompositions registered for the complex-valued input
⬆️
<class 'torch.onnx._internal.exporter._errors.ConversionError'>: Error when translating node %slice_1 : [num_users=1] = call_function[target=torch.ops.aten.slice.Tensor](args = (%_to_copy, 0, 0, 9223372036854775807), kwargs = {}). See the stack trace for more information.
```
Full reproduction code:
```python
import torch
class ComplexSliceModel(torch.nn.Module):
def forward(self, x):
# Convert input to a complex tensor
x_complex = x.to(torch.complex64)
# Apply a slice operation on the complex tensor
return x_complex[:, :2]
model = ComplexSliceModel()
dummy_input = torch.randn(3, 4)
# Verify the model works as expected
print("Model output:", model(dummy_input))
# This call fails due to the slice op on a complex tensor.
torch.onnx.export(model, dummy_input, "complex_slice.onnx", dynamo=True)
```
**Versions**
```
PyTorch version: 2.6.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.9 | packaged by conda-forge | (main, Feb 14 2025, 07:56:32) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-15.3.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Pro
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20250121
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
```
| true
|
2,879,712,979
|
[Inductor-CPU] Fix broken int8 WoQ GEMM AMX implementation in main
|
sanchitintel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 15
|
COLLABORATOR
|
#146843 broke int8 WoQ GEMM's (for BF16 activation) AMX ISA implementation in the main branch.
UT: `python test/inductor/test_cpu_select_algorithm.py -v -k woq`
The issue remained undetected because, when a templated kernel fails to compile, the auto-tuning infra marks its runtime as `inf` and falls back to the op it was being benchmarked against, so UTs didn't fail even on machines that support the AMX ISA.
`test/inductor/test_cpu_select_algorithm.py` UTs checked the value of the `select_algorithm_autotune` counter, which only counts how many ops were selected for autotuning against their templated codegened counterparts.
@leslie-fang-intel advised using a new counter. I added `counters["inductor"]["cpp_templated_kernel_counter"]`, which is incremented after a codegened kernel's compilation, so it'd help catch breakage scenarios in which a templated kernel could not be codegened due to a compilation failure.
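A minimal sketch (not the actual UT) of how a test might read both counters; the counter name `cpp_templated_kernel_counter` is taken from the description above, while the matmul and `max_autotune` settings here are illustrative assumptions.
```python
import torch
from torch._dynamo.utils import counters

counters.clear()

@torch.compile(options={"max_autotune": True})
def f(x, w):
    return x @ w

f(torch.randn(128, 128), torch.randn(128, 128))

# Old check: how many ops were selected for autotuning against templated candidates.
print(counters["inductor"]["select_algorithm_autotune"])
# New check: how many templated kernels actually compiled, which catches the
# "compilation failed, runtime marked as inf" scenario described above.
print(counters["inductor"]["cpp_templated_kernel_counter"])
```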
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,879,702,149
|
Don't crash when we call __qualname__ on torch._C.ScriptFunction
|
c00w
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 16
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147906
* __->__ #147894
We've root-caused this to correctly throwing AttributeError on ScriptFunction when missing attributes are accessed. This PR will fix the crashes that are showing up. I'm going to stack a second PR to fix torch._C.ScriptFunction just being a very badly behaved Python object (which should also fix this).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,700,327
|
[BE] Switch `TestConsistency` to MPS device
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147893
Which will eventually allow moving more decorators away from `common_mps.py`.
Adjust tolerances accordingly. XFAIL a bunch of tests on MacOS-13, which is going to be deprecated anyway.
| true
|
2,879,700,245
|
[BE] Switch `index_variable` to `torch.testing.make_tensor`
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: not user facing",
"ciflow/mps"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147893
* #147905
* __->__ #147892
As it was a long-time TODO, and it actually unblocks using this function for MPS devices (which do not support double).
| true
|
2,879,695,691
|
[ca] side-effect free initial trace: RAII PyCompilerInterface
|
xmfan
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148042
* __->__ #147891
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,693,446
|
[ROCm] [TunableOp] Unit tests for scaled GEMM and GEMM with bias
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
Two more unit tests for TunableOp:
- Scaled GEMM
- GEMM with bias
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,879,679,122
|
Bitshift with MPS backend
|
philkr
|
closed
|
[
"triaged",
"module: correctness (silent)",
"module: mps"
] | 2
|
NONE
|
### 🐛 Describe the bug
The bitshift `<<` operation seems broken in the MPS backend
```python
import torch
1 << torch.arange(10, device="mps")
```
returns
```python
tensor([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18], device='mps:0')
```
Expected result
```python
tensor([ 1, 2, 4, 8, 16, 32, 64, 128, 256, 512], device='mps:0')
```
Other backends give the correct result, as did PyTorch 2.3.1. PyTorch 2.6.0 and nightly 2.7.0 both give the wrong result. It seems reproducible across devices.
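A quick cross-device comparison of the report (assuming an MPS-capable machine):
```python
import torch

shifts = torch.arange(10)
cpu_result = 1 << shifts
mps_result = (1 << shifts.to("mps")).cpu()
print(cpu_result)                           # expected: 1, 2, 4, ..., 512
print(mps_result)                           # reported: 0, 2, 4, ..., 18 on affected versions
print(torch.equal(cpu_result, mps_result))  # False on affected versions
```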
### Versions
PyTorch version: 2.7.0.dev20250224
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.4 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.23.1
Libc version: N/A
Python version: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:13:44) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.4-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.0
[pip3] torch==2.7.0.dev20250224
[pip3] torchaudio==2.6.0.dev20250224
[pip3] torchvision==0.22.0.dev20250224
[conda] numpy 2.0.0 pypi_0 pypi
[conda] torch 2.7.0.dev20250224 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250224 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250224 pypi_0 pypi
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,879,623,728
|
[logs][qol] Print log options alphabetically
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147824
* __->__ #147888
| true
|
2,879,621,532
|
DISABLED test_inductor_reduce_scatter_tensor_coalesced (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d"
] | 17
|
NONE
|
Platforms: inductor, rocm, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_reduce_scatter_tensor_coalesced&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37797588417).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_reduce_scatter_tensor_coalesced`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/test_c10d_functional_native.py`
cc @clee2000 @wdvr
| true
|
2,879,608,515
|
[scan] User-facing reverse flag handling
|
bohnstingl
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 4
|
COLLABORATOR
|
This PR removes the reverse flag from the backend implementation and resolves it via `torch.flip` in the frontend.
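As an illustration of the rewrite (not the scan HOP API itself), a reverse scan can be expressed as flip, forward scan, flip; for example with a cumulative sum:
```python
import torch

xs = torch.arange(1.0, 6.0)
# Reverse cumulative sum (suffix sums), computed directly.
direct = torch.stack([xs[i:].sum() for i in range(len(xs))])
# The same result expressed as flip -> forward scan -> flip.
via_flip = torch.flip(torch.cumsum(torch.flip(xs, dims=[0]), dim=0), dims=[0])
print(torch.allclose(direct, via_flip))  # True
```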
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @ydwu4
| true
|
2,879,570,254
|
[inductor][ck] kBatch parametrized
|
coconutruben
|
closed
|
[
"module: rocm",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary:
# Why
Enable us to set the kBatch parameter, rather than bake it in
Especially for larger splitK scenarios, this can yield very good performance (up to 1.5x vs hipblaslt from initial tests)
## Why like this
The obvious question should be: why not add this to the op itself, and maybe even into the template/kernel. That would simplify the code.
The choice to have it as a "runtime" param that we fix is to be able to reuse the compiled CK `.so` libraries, as now multiple choices of kBatch can be used with the exact same `.so` (the shared library does not depend on kBatch, but takes it as a parameter).
# What
- copy cutlass approach for swizzle to have a "runtime" arg that we pass in but is really choice dependent
- pipe through everything from template and kernel
- hard-code it to be kBatch=1 for now (same as before, just now settable)
This is part of a series of Diffs, where next we need to figure out
1. how to filter out ops + kBatch that don't work
2. set this better for splitK scenarios (hand written heuristic)
Test Plan:
(with minor modifications)
```
# show it working with AOTI
buck2 run mode/opt-amd-gpu //scripts/henrylhtsang/repros:aot
```
```
# show it working with inductor only
buck2 run -c fbcode.re_gpu_tests=False mode/opt-amd-gpu fbcode//deeplearning/aot_inductor/benchmark/sampling:test_gemm_autotune_benchmark_AMD_block_0
```
Differential Revision: D70200008
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,879,553,231
|
`FxGraphDrawer` fails on `einsum` nodes
|
f-dangel
|
open
|
[
"triaged",
"module: fx"
] | 0
|
NONE
|
### 🐛 Describe the bug
I am trying to visualize a `torch.fx.GraphModule` using `torch.fx.passes.graph_drawer.FxGraphDrawer`.
If the graph module contains an `einsum` operation, I get a `bad label` error.
Here is an MWE to reproduce the problem:
```python
"""Cannot visualize `einsum` nodes with `torch.fx` graph drawer."""
from torch import einsum
from torch.fx import passes, symbolic_trace
from torch.nn import Module
# Setting this to `True` triggers the error.
# Everything works fine if set to `False`.
USE_EINSUM = True
class Square(Module):
def forward(self, x):
if USE_EINSUM:
return einsum("a,a->a", x)
else:
return x**2
mod = symbolic_trace(Square())
g = passes.graph_drawer.FxGraphDrawer(mod, "einsum")
g.get_dot_graph().write_svg("einsum.svg")
```
Here is an example of the error:
```bash
b'Error: bad label format {name=%einsum|op_code=call_function\\n|target=torch.functional.einsum\\n|args=(a,a->a,)|num_users=1\\n}\n'
```
### Versions
```
PyTorch version: 2.4.0.post101
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3.1 (x86_64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.31.0
Libc version: N/A
Python version: 3.9.16 (main, May 15 2023, 18:51:40) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] flake8-bugbear==24.12.12
[pip3] flake8-comprehensions==3.16.0
[pip3] flake8-tidy-imports==4.11.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.1
[pip3] torch==2.4.0.post101
[conda] libtorch 2.4.0 cpu_mkl_hdbae018_101 conda-forge
[conda] mkl 2023.2.0 h54c2260_50500 conda-forge
[conda] numpy 2.0.2 pypi_0 pypi
[conda] numpy-base 2.0.1 py39h03d8c7d_1
[conda] pytorch 2.4.0 cpu_mkl_py39hd2dbf71_101 conda-forge
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,879,531,469
|
[aotd] Log torch._functorch.config in tlparse
|
IvanKobzarev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147883
Adding torch._functorch.config to tlparse for better debuggability.
E.g. https://github.com/pytorch/pytorch/pull/147638 happened only with `torch._functorch.config.view_replay_for_aliased_outputs=False`, which is True by default.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,510,591
|
[CI] add missing matrix cases for `pytorch-linux-focal-py{3.12,3.13}-clang10`
|
XuehaiPan
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147882
These two images are referenced here:
https://github.com/pytorch/pytorch/blob/adf0f4ffd24eac6bf0c49d49c82a2d0e988196c0/.github/workflows/docker-builds.yml#L57-L60
https://github.com/pytorch/pytorch/blob/adf0f4ffd24eac6bf0c49d49c82a2d0e988196c0/.github/workflows/pull.yml#L517
https://github.com/pytorch/pytorch/blob/adf0f4ffd24eac6bf0c49d49c82a2d0e988196c0/.github/workflows/pull.yml#L224
| true
|
2,879,381,680
|
[export][dynamic shapes] add Dim._OBLIVIOUS, _mark_oblivious()
|
pianpwk
|
open
|
[
"fb-exported",
"Stale",
"ciflow/trunk",
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 10
|
CONTRIBUTOR
|
Summary: Adds `Dim._OBLIVIOUS` in export dynamic shapes, and `_mark_oblivious()` in dynamo decorators, to support the use of OBLIVIOUS_SIZE.
The semantics are that we allocate what looks like an unbacked symbol, but is technically backed; it contains a hint, and the user intention is just to opt into size-oblivious reasoning and avoid 0/1 specialization.
Decided to do this over mark_unbacked + hint because it's easier to write code in symbolic_shapes that distinguishes between valid reasoning for oblivious sizes and general unbacked (e.g. we can set replacements for oblivious sizes since they're graph inputs, and we don't face the "time traveling" problem): https://github.com/pytorch/pytorch/blob/3a69dee955f2c6c57f7c879ba82469fa0c1d0b74/torch/fx/experimental/symbolic_shapes.py#L6284-L6297
On the other hand, if we just handled this with unbacked + hint, it's hard to tell whether we're dealing with input sizes that should have hints, or whether we've just been doing real-tensor prop.
Test Plan: test_export
Differential Revision: D70193972
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,361,387
|
[dynamo] add sourceless builder for `types.MethodType`
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 16
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148151
* #113258
* #113257
* __->__ #147880
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,317,666
|
Flex Attention is incompatible with selective AC
|
fegin
|
open
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When using FlexAttention with selective activation checkpointing, we got the error below:
```
traceback : Traceback (most recent call last):
File "/data/users/chienchin/mywork/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 354, in wrapper
return f(*args, **kwargs)
File "/data/users/chienchin/fbsource/fbcode/pytorch/torchtitan/train.py", line 306, in main
pred = model(input_ids)
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1857, in _call_impl
return inner()
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1805, in inner
result = forward_call(*args, **kwargs)
File "/data/users/chienchin/fbsource/fbcode/pytorch/torchtitan/torchtitan/models/llama/model.py", line 478, in forward
h = layer(h, self.freqs_cis)
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1857, in _call_impl
return inner()
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1805, in inner
result = forward_call(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/distributed/algorithms/_checkpoint/checkpoint_wrapper.py", line 171, in forward
return self.checkpoint_fn( # type: ignore[misc]
File "/data/users/chienchin/mywork/pytorch/torch/_compile.py", line 51, in inner
return disable_fn(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/_dynamo/eval_frame.py", line 764, in _fn
return fn(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/utils/checkpoint.py", line 495, in checkpoint
ret = function(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/chienchin/fbsource/fbcode/pytorch/torchtitan/torchtitan/models/llama/model.py", line 359, in forward
h = x + self.attention(self.attention_norm(x), freqs_cis)
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/data/users/chienchin/fbsource/fbcode/pytorch/torchtitan/torchtitan/models/llama/model.py", line 230, in forward
output = flex_attention(xq, xk, xv, block_mask=self.block_mask)
File "/data/users/chienchin/mywork/pytorch/torch/nn/attention/flex_attention.py", line 1357, in flex_attention
out, lse = torch.compile(
File "/data/users/chienchin/mywork/pytorch/torch/_dynamo/eval_frame.py", line 585, in _fn
return fn(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/nn/attention/flex_attention.py", line 1345, in _flex_attention_hop_wrapper
return flex_attention_hop(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/chienchin/mywork/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/chienchin/mywork/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/chienchin/mywork/pytorch/torch/_ops.py", line 455, in dispatch
return kernel(*args, **kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/_higher_order_ops/flex_attention.py", line 744, in flex_attention_autograd
out, logsumexp = FlexAttentionAutogradOp.apply(
File "/data/users/chienchin/mywork/pytorch/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/data/users/chienchin/mywork/pytorch/torch/_higher_order_ops/flex_attention.py", line 610, in forward
out, logsumexp = flex_attention(
File "/data/users/chienchin/mywork/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/chienchin/mywork/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/chienchin/mywork/pytorch/torch/_ops.py", line 462, in wrapper
return torch.overrides.handle_torch_function(
File "/data/users/chienchin/mywork/pytorch/torch/overrides.py", line 1721, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/data/users/chienchin/mywork/pytorch/torch/_dynamo/_trace_wrapped_higher_order_op.py", line 142, in __torch_function__
return func(*args, **(kwargs or {}))
File "/data/users/chienchin/mywork/pytorch/torch/_higher_order_ops/flex_attention.py", line 92, in __call__
return super().__call__(
File "/data/users/chienchin/mywork/pytorch/torch/_ops.py", line 471, in __call__
return wrapper()
File "/data/users/chienchin/mywork/pytorch/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/data/users/chienchin/mywork/pytorch/torch/_ops.py", line 365, in dispatch
raise NotImplementedError(
NotImplementedError: There was no rule registered for HOP flex_attention and mode <torch.utils.checkpoint._CachingTorchDispatchMode object at 0x7f3e5cc0fac0>. We recommend filing an issue.
```
This issue can be reproduced with https://github.com/pytorch/torchtitan/pull/887 and `CONFIG_FILE="./train_configs/llama3_8b.toml" ./run_llama_train.sh --model.use_flex_attn`
Note that full activation checkpointing doesn't cause this issue.
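For reference, here is a minimal standalone sketch of the failing combination (shapes and the save/recompute policy are arbitrary placeholders, not the torchtitan code from the traceback above):
```python
import functools
import torch
from torch.nn.attention.flex_attention import flex_attention
from torch.utils.checkpoint import (
    CheckpointPolicy,
    checkpoint,
    create_selective_checkpoint_contexts,
)

def policy_fn(ctx, op, *args, **kwargs):
    # Arbitrary selective-AC policy: save matmuls, recompute everything else.
    if op == torch.ops.aten.mm.default:
        return CheckpointPolicy.MUST_SAVE
    return CheckpointPolicy.PREFER_RECOMPUTE

def attn_block(q, k, v):
    return flex_attention(q, k, v)

if torch.cuda.is_available():
    q, k, v = (
        torch.randn(2, 4, 128, 64, device="cuda", requires_grad=True)
        for _ in range(3)
    )
    context_fn = functools.partial(create_selective_checkpoint_contexts, policy_fn)
    # Expected to fail with the "no rule registered for HOP flex_attention"
    # error under _CachingTorchDispatchMode, as in the traceback above.
    out = checkpoint(attn_block, q, k, v, use_reentrant=False, context_fn=context_fn)
    out.sum().backward()
```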
### Versions
nightly
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,879,312,751
|
follow up to #147548, fix regression on MI300
|
jeffdaily
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 10
|
COLLABORATOR
|
Removing curly braces seemed superficial but broke MI300 rowwise matmul.
| true
|
2,879,309,575
|
[MPS] faster integer batched matmul
|
Isalia20
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: performance",
"release notes: mps",
"ciflow/mps"
] | 6
|
COLLABORATOR
|
Followup to #147526
Tiled matmul for bmm as well.
## Speed ups:

Script to record times:
```python
import torch
import numpy as np
import time
import csv
batch_sizes = [1, 2, 4, 8]
matrix_sizes = [256, 512, 1024, 2048]
num_runs = 10
warmup_runs = 3
def run_int_mm(A, B):
torch.mps.synchronize()
start = time.perf_counter()
c = A @ B
torch.mps.synchronize()
end = time.perf_counter()
return c, end - start
results = {
'N': [],
'B': [],
'mean_time': [],
'std_time': []
}
for b in batch_sizes:
for n in matrix_sizes:
print(f"\nBenchmarking N={n} and B={b}")
try:
A_mps = torch.randint(low=-100, high=100, size=(b, n, n), dtype=torch.int8, device="mps")
B_mps = torch.randint(low=-100, high=100, size=(b, n, n), dtype=torch.int8, device="mps")
for _ in range(warmup_runs):
_, _ = run_int_mm(A_mps, B_mps)
times = []
for _ in range(num_runs):
_, t = run_int_mm(A_mps, B_mps)
times.append(t)
mean_time = np.mean(times)
std_time = np.std(times)
results['N'].append(n)
results['B'].append(b)
results['mean_time'].append(mean_time)
results['std_time'].append(std_time)
print(f"Mean time: {mean_time:.4f}s ± {std_time:.4f}s")
except RuntimeError as e:
print(f"Error for N={n}: {e}")
continue
with open('int_bmm_benchmark_times_new.csv', 'w', newline='') as f:
writer = csv.writer(f)
writer.writerow(['N', 'batch', 'mean_time', 'std_time'])
for i in range(len(results['N'])):
writer.writerow([
results['N'][i],
results['B'][i],
results['mean_time'][i],
results['std_time'][i]
])
```
| true
|
2,879,296,924
|
[inductor] Implement max_pool2d_with_indices as a reduction for large window sizes
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148210
* #148209
* __->__ #147876
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,879,278,235
|
roundtrip cast between float32|bfloat16 and e8m0 should work in torchinductor
|
vkuzo
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We should make sure that float32|bfloat16 -> e8m0 and back cast works in torchinductor:
```python
import torch
dtype = torch.float8_e8m0fnu
hp_dtype = torch.float32  # and torch.bfloat16
device = "cuda"  # `device` was not defined in the original snippet; "cuda" is assumed here
def foo(x0):
x1 = x0.to(dtype)
x2 = x1.to(hp_dtype)
return x2
x0 = torch.randn(16, 16, device=device, dtype=hp_dtype)
foo_c = torch.compile(foo, backend="inductor", fullgraph=True)
with torch.no_grad():
y_c = foo_c(x0)
```
* Today, this fails with the following error message: https://gist.github.com/vkuzo/e6ab922d3ddabec8d9f7836d56d58712
* A failing, skipped test case for this behavior is being added in https://github.com/pytorch/pytorch/pull/147770
This is important for the PT2 support of MX workflows (tracked in https://github.com/pytorch/ao/issues/556). Specifically, once this functionality exists, a user would be able to write a scaling+casting kernel for MX and directly cast from float32 to e8m0, without having to implement this cast themselves with bit shifting. The semantics of the cast to e8m0 are described in detail in https://github.com/pytorch/pytorch/issues/146414.
### Versions
main branch
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi
| true
|
2,879,268,609
|
Does CUDACachingAllocator.cpp still require deferred event creation?
|
galv
|
open
|
[
"module: cuda",
"triaged",
"module: CUDACachingAllocator"
] | 2
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
This commit back in 2017 changed the cuda caching allocator pretty drastically: https://github.com/pytorch/pytorch/commit/07f5b21ef1bd29d1451c616062dcbfc3f8fd7c6a
The previous one had the following semantics:
- The user would call CachingHostAllocator_recordEvent() after every usage of a pinned memory allocation (this actually happens in only one place, in Copy.cu, right now). This would create a cudaEvent_t after every time recordEvent() was called. Note that cudaEventCreateWithFlags can be a potentially expensive call in terms of time taken and whether it takes locks.
- A memory block was "safe" to free only if cudaEventQuery() on all events recorded for that block returned cudaSuccess.
The one after that commit has the following semantics:
- The user would still call CachingHostAllocator_recordEvent() after every usage of a pinned memory allocation, but no cuda event would be created. Instead, the cuda stream on which the usage occurred would be saved into a hashed set. No event would be created.
- Instead, event creation would be deferred until free() was called, with the events being recorded on all streams the pinned memory was used on. This means that the corresponding events could potentially include kernels that happen after the call to CachingHostAllocator_recordEvent(), so after this "optimization", the event becomes less precise. My hunch is that this can potentially increase pinned memory usage, though I don't have an example right now.
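To make the two bookkeeping strategies concrete, here is a simplified Python sketch (hypothetical class and method names; the real logic is the C++ in the caching host allocator, and this ignores locking, pooling, and graph-capture concerns):
```python
import torch

class PinnedBlock:
    """Toy model of one pinned-memory block's event bookkeeping."""

    def __init__(self, nbytes):
        self.tensor = torch.empty(nbytes, dtype=torch.uint8, pin_memory=True)
        self.events = []      # pre-2017 style: one event per recorded usage
        self.streams = set()  # post-2017 style: streams only, events deferred

    def record_event_eager(self, stream):
        # Old semantics: create + record a CUDA event at every usage.
        ev = torch.cuda.Event()
        ev.record(stream)
        self.events.append(ev)

    def record_event_deferred(self, stream):
        # New semantics: just remember the stream; no event is created yet.
        self.streams.add(stream)

    def mark_freed(self):
        # At free(): record events on every stream the block was used on.
        # These events may also cover kernels launched after the original usage.
        for s in self.streams:
            ev = torch.cuda.Event()
            ev.record(s)
            self.events.append(ev)
        self.streams.clear()

    def reusable(self):
        # The block can be handed out again only once every event has completed.
        return all(ev.query() for ev in self.events)

if torch.cuda.is_available():
    blk = PinnedBlock(1024)
    s = torch.cuda.Stream()
    blk.record_event_deferred(s)
    blk.mark_freed()
    print(blk.reusable())
```
The point of the sketch is only the difference in *when* events are recorded: eagerly at each usage versus once at free() time, which is why the deferred events can also cover later kernels on the same streams.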
The referenced PR https://github.com/pytorch/pytorch/pull/702 is not a PR; it's an issue. So I can't read it to try to understand the motivation better. @soumith do you have any idea what happened here? Was there an old repo for pytorch before the current one?
Regardless, I think the goal of that PR was to remove cudaEventCreate from the critical path by moving it to free() rather than CachingHostAllocator_recordEvent(). However, @ajtulloch made a PR in #69299 that creates a cache of cuda events and achieves this in a more correct way. free() can easily be on the critical path (i.e., blocking your CPU from launching more CUDA kernels) in pythonic PyTorch code (e.g., it runs every time a pinned tensor goes out of scope with no remaining references). So Andrew's PR is much better.
Context: I am working on adding cuda stream graph capture support to code calling torch.Tensor.pin_memory(). Under graph capture, cuda events can no longer be used to check whether all usages of a piece of pinned memory have completed. I believe I know the right way (see whether a path exists from the kernel using the pinned allocation to the node just before the free() call), but the above behavior was certainly quite confusing to me.
To be frank, the CUDACachingAllocator.cpp file has expanded in size and complexity over the years, so I'm not sure about removing this old optimization, for fear of messing something up. But I wanted to document the concern.
### Alternatives
_No response_
### Additional context
_No response_
cc @ptrblck @msaroufim @eqy
| true
|
2,879,263,484
|
returning tensors of dtype torch.float8_e8m0fnu should work with torchinductor
|
vkuzo
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
We should make sure the following works:
```python
import torch
dtype = torch.float8_e8m0fnu
device = "cuda"
def foo(x0):
x1 = x0 + 1
x2 = x1.view(dtype)
return x2
x0 = torch.randint(0, 255, (16, 16), device=device, dtype=torch.uint8)
foo_c = torch.compile(foo, backend="inductor", fullgraph=True)
with torch.no_grad():
y_c = foo_c(x0)
```
* Today, this fails with the following error message: https://gist.github.com/vkuzo/d2f560d34b7c68fc89671fa8f80f6294
* A failing, skipped test case for this behavior is being added in https://github.com/pytorch/pytorch/pull/147770
This is important for the PT2 support of MX workflows (tracked in https://github.com/pytorch/ao/issues/556). Specifically, once this functionality exists, a user would be able to write a scaling+casting kernel for MX and output the scales directly in the e8m0 dtype, instead of having to output in uint8 and view as e8m0 afterwards.
### Versions
main branch
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,879,252,020
|
[dynamo] add context manager debug information to graph breaks
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compile ux"
] | 15
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147912
* __->__ #147872
* #147494
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,220,295
|
[dynamo] Plumb HOP debug info into side_effects
|
williamwen42
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"module: higher order operators",
"module: pt2-dispatcher",
"dynamo-side-effects",
"module: compile ux"
] | 0
|
MEMBER
|
See https://github.com/pytorch/pytorch/pull/147385#discussion_r1967836734
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @ydwu4 @bdhirsh
| true
|
2,879,218,232
|
[Not4Land] test `optree` with HEAD version
|
XuehaiPan
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: pytree",
"not4land",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"ci-test-showlocals"
] | 2
|
COLLABORATOR
|
cc @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,879,207,520
|
[dtensor] refactor sharding prop to handle cross mesh computation
|
wanchaol
|
closed
|
[
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (dtensor)"
] | 3
|
COLLABORATOR
|
As titled, this PR moves the same-mesh check from the sharding propagation level down to each individual operator.
This gives each individual operator more flexibility to decide whether it can run on the given meshes. For example, before this PR, if a user has two DTensor params that live on different DeviceMeshes and wants to run a `foreach` operator on them individually, it would error out with a cross-mesh error. But for foreach computation there can be DTensors that live on different meshes, as long as the meshes match in a "zipped" way.
This should also fix https://github.com/pytorch/pytorch/issues/134212
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,879,130,860
|
Track follow ups to #147354
|
mikaylagawarecki
|
open
|
[
"module: internals",
"triaged"
] | 0
|
CONTRIBUTOR
|
Filing issue to track https://github.com/pytorch/pytorch/pull/147354#pullrequestreview-2631005924
tl;dr inplace `Tensor.set_(storage)` (except for the [meta symint variant](https://github.com/pytorch/pytorch/blob/346bbefa630b58fda5453373e2a3bdcc32236a16/aten/src/ATen/native/TensorShape.cpp#L397) which seems to properly handle this) would
- unsafely set the storage offset
https://github.com/pytorch/pytorch/blob/346bbefa630b58fda5453373e2a3bdcc32236a16/aten/src/ATen/native/TensorShape.cpp#L383
- call resize_, which would skip resizing if the sizes and strides were unchanged
https://github.com/pytorch/pytorch/blob/346bbefa630b58fda5453373e2a3bdcc32236a16/aten/src/ATen/native/Resize.cpp#L204-L206
**This is reachable from the weights only unpickler and it was found that this can be used to trigger out of bounds accesses.**
To fix this I added a check to make sure the storage is within bounds if the size/stride don't change using `checkInBoundsForStorage`
However, despite this function already being symintified, there were two points within `checkInBoundsForStorage` that caused "Could not guard on data-dependent expression" issues. Per https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing I made the following changes
(1) `storage_size_bytes == 0` https://github.com/pytorch/pytorch/blob/346bbefa630b58fda5453373e2a3bdcc32236a16/aten/src/ATen/native/Resize.h#L91-L93
I made this use `TORCH_GUARD_SIZE_OBLIVIOUS` to make the early return evaluate to False when compiling
https://github.com/pytorch/pytorch/blob/8e4decdb6e3f4f70488d4967de1dffabe96c9064/aten/src/ATen/native/Resize.h#L104-L106
(2) The TORCH_CHECK to make sure storage size + offset <= size of the new storage
https://github.com/pytorch/pytorch/blob/346bbefa630b58fda5453373e2a3bdcc32236a16/aten/src/ATen/native/Resize.h#L96-L110
I changed this to a TORCH_SYM_CHECK to make it a deferred runtime assert; however, because the earlier storage_size_bytes == 0 check is wrapped in TORCH_GUARD_SIZE_OBLIVIOUS, I made the condition the following
https://github.com/pytorch/pytorch/blob/8e4decdb6e3f4f70488d4967de1dffabe96c9064/aten/src/ATen/native/Resize.h#L109-L123
where TORCH_MAYBE_SYM_CHECK is defined as such https://github.com/pytorch/pytorch/blob/8e4decdb6e3f4f70488d4967de1dffabe96c9064/c10/core/SymBool.h#L95-L100
sym_eq/sym_le with int64_t arguments return bool, and in order to or those bools together I added logic for logical or (`||`) to SymBool.h
https://github.com/pytorch/pytorch/blob/8e4decdb6e3f4f70488d4967de1dffabe96c9064/c10/core/SymBool.h#L52-L54
IIUC the current bitwise or `|` is actually implemented as logical or (but I had to add `||` as well since, for the bool || bool case, there is a rule in our mobile builds that makes sure bools are not being bitwise or-ed):
`error: use of bitwise '|' with boolean operands [-Werror,-Wbitwise-instead-of-logical]`
This issue is to track the changes made and make any appropriate fixes when Ed is back from leave
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @albanD
| true
|
2,878,994,133
|
[Inductor][Tests] Update `get_divisible_by_16` function in `test_torchinductor.py` to work correctly with new Triton
|
anmyachev
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 4
|
COLLABORATOR
|
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,878,993,657
|
Parallelize bf16->f32 conversion for gemm(bf16:bf16->bf16)
|
aditew01
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"module: arm",
"topic: not user facing"
] | 2
|
COLLABORATOR
|
Improves performance for at::addmm / linear kernels when executed in dtype=bfloat16 and when SBGEMM is available.
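For reference, a minimal timing sketch (arbitrary shapes and iteration counts; just one way to compare a build with and without the SBGEMM path, not part of the PR itself):
```python
import time
import torch

# Arbitrary shapes; a small-M activation exercises the addmm/linear path.
M, K, N = 64, 4096, 4096
model = torch.nn.Linear(K, N, dtype=torch.bfloat16)
x = torch.randn(M, K, dtype=torch.bfloat16)

with torch.inference_mode():
    for _ in range(10):  # warm-up
        model(x)
    iters = 100
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    elapsed = time.perf_counter() - start

print(f"bf16 linear ({M}x{K} @ {K}x{N}): {elapsed / iters * 1e3:.3f} ms/iter")
```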
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,878,987,993
|
[export] Add support for invoke_subgraph
|
angelayi
|
closed
|
[
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: export"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147992
* __->__ #147863
* #147862
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,878,987,765
|
Add some more meta kernels
|
angelayi
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147862
| true
|
2,878,922,237
|
[Resubmit] Record input strides at time of tracing, constrain to them for triton fn
|
eellison
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147861
Resubmit of https://github.com/pytorch/pytorch/pull/145448; it lost its changes on rebase.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,878,919,076
|
addmv bfloat16 accuracy issues on cpu
|
AnthonyBarbier
|
closed
|
[
"triaged",
"module: bfloat16",
"module: linear algebra",
"module: correctness (silent)",
"module: arm",
"module: intel"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Significant inaccuracy when using the addmv composite op instead of the individual ops with bf16 inputs:
```python
import torch
b = torch.tensor([5.8438, -6.3125], dtype=torch.bfloat16)
A = torch.tensor(
[
[4.3125, -6.4375, -3.8125, 4.3125, 8.2500, -5.4062, -3.2656, -5.4688, 6.1562, -2.9062],
[7.0000, 0.7617, 5.1875, -4.9375, 4.5625, -6.5938, -0.9023, 6.4375, -2.4219, -3.3906],
],
dtype=torch.bfloat16,
)
x = torch.tensor(
[5.6875, -8.0000, 8.0625, 5.0000, -4.7812, 7.4375, -1.4766, 3.2344, 1.9688, -3.3125],
dtype=torch.bfloat16,
)
c = b + (A @ x)
print(f"b + (A @ x) = {c}")
out = torch.addmv(b, A, x, alpha=1, beta=0)
print(f"addmv(beta=0) + b bfloat {out+b}")
print(f"addmv(beta=0) + b fp32 {(out.float()+b.float()).bfloat16()}")
out = torch.addmv(b, A, x, alpha=1, beta=1)
print(f"addmv bfloat {out}")
```
Output on x86 with https://download.pytorch.org/whl/nightly/cpu/torch-2.7.0.dev20250224%2Bcpu-cp313-cp313-manylinux_2_28_x86_64.whl
```
b + (A @ x) = tensor([1.9219, 2.3125], dtype=torch.bfloat16)
addmv(beta=0) + b bfloat tensor([1.9219, 2.3125], dtype=torch.bfloat16)
addmv(beta=0) + b fp32 tensor([1.9219, 2.3125], dtype=torch.bfloat16)
addmv bfloat tensor([1.6562, 2.1562], dtype=torch.bfloat16)
```
Output on Graviton 3 with https://download.pytorch.org/whl/nightly/cpu/torch-2.7.0.dev20250224%2Bcpu-cp310-cp310-manylinux_2_28_aarch64.whl
```
b + (A @ x) = tensor([1.9219, 2.3125], dtype=torch.bfloat16)
addmv(beta=0) + b bfloat tensor([1.9219, 2.3125], dtype=torch.bfloat16)
addmv(beta=0) + b fp32 tensor([1.9219, 2.3125], dtype=torch.bfloat16)
addmv bfloat tensor([1.6562, 2.1562], dtype=torch.bfloat16)
```
It doesn't seem to be an accumulation issue, as it only affects the "add" part.
I thought it might be an issue with the accumulator being fp32, which is why I tried to manually upcast the operands for the add, but that didn't change anything.
### Versions
x86 Environment:
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250224+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: EndeavourOS Linux (x86_64)
GCC version: (GCC) 14.2.1 20250207
Clang version: Could not collect
CMake version: version 3.31.5
Libc version: glibc-2.41
Python version: 3.13.2 (main, Feb 5 2025, 08:05:21) [GCC 14.2.1 20250128] (64-bit runtime)
Python platform: Linux-6.13.4-arch1-1-x86_64-with-glibc2.41
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 570.86.16
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 24%
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 4224.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (12 instances)
L1i cache: 512 KiB (12 instances)
L2 cache: 12 MiB (9 instances)
L3 cache: 25 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.7.0.dev20250224+cpu
[pip3] torchaudio==2.6.0.dev20250224+cpu
[pip3] torchvision==0.22.0.dev20250224+cpu
[conda] Could not collect
```
Aarch64 environment:
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250224+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 1 MiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] torch==2.7.0.dev20250224+cpu
[pip3] torchaudio==2.6.0.dev20250224
[pip3] torchvision==0.22.0.dev20250224
[conda] Could not collect
```
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @malfet @snadampal @milpuz01 @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,878,861,329
|
Exporting a PyTorch Model with Dynamic ModuleList Indexing to ONNX
|
tso2381637
|
open
|
[
"module: onnx",
"triaged"
] | 0
|
NONE
|
Description:
I have a PyTorch model that contains a torch.nn.ModuleList with multiple torch.nn.Linear layers. The forward pass selects a specific layer dynamically based on an index input. Below is the model definition:
```
import torch
class TorchModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.fc1 = torch.nn.ModuleList([torch.nn.Linear(10, 10) for _ in range(10)])
def forward(self, x, i):
return self.fc1[i](x)
```
I want to export this model to ONNX while preserving the dynamic indexing (self.fc1[i]) so that I can perform inference in ONNX with a variable index value. However, ONNX does not natively support dynamic indexing for ModuleList.
Is there a way to export this model to ONNX while ensuring that the entire computation graph, including dynamic layer selection, is preserved? If not, what are the possible workarounds to achieve similar functionality in ONNX?
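One workaround sometimes suggested for this pattern (a sketch only, under the assumption that the extra compute is acceptable; not validated here against a specific ONNX exporter version) is to run every branch and select the result by index, so the exported graph stays static:
```python
import torch

class TorchModelGather(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.ModuleList([torch.nn.Linear(10, 10) for _ in range(10)])

    def forward(self, x, i):
        # Compute all branches, stack them, and select by index; indexing the
        # stacked tensor exports as a gather instead of Python control flow.
        outs = torch.stack([layer(x) for layer in self.fc1], dim=0)
        return outs[i]

# Example usage (eager): `i` should be a 0-dim integer tensor for tracing/export.
m = TorchModelGather()
y = m(torch.randn(2, 10), torch.tensor(3))
```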
| true
|
2,878,790,729
|
[fix]: Offload OpenBLAS gemv calls to dedicated OpenBLAS kernel
|
nikhil-arm
|
open
|
[
"open source"
] | 5
|
COLLABORATOR
|
Description:
1. Directly call mv and addmv instead of re-routing via addmm
2. Avoid the weight transpose, as mv and addmv do not require it
Improvement: 14% perf improvement for the gemv operator on shape M=1, K=4096, N=4096
Tester Script:
```
import torch
import torch.nn as nn
import torch.profiler as profiler
from time import time
import numpy as np
import sys
torch.manual_seed(0)
M = 1
K = 4096
N = 4096
bias = 1
dtype=torch.float32
class Net(nn.Module):
def __init__(self, K, N):
super(Net, self).__init__()
b = (bias == 1)
self.linear = torch.nn.Linear(K, N, bias=b, dtype=dtype)
def forward(self, x):
return self.linear(x)
model = Net(K, N)
model.eval()
input = torch.randn(M, K, dtype=dtype)
for _ in range(5):
model(input)
with profiler.profile(with_stack=True, profile_memory=False, record_shapes=True) as prof:
for _ in range(10000):
outputs = model(input)
print(prof.key_averages(group_by_input_shape=True).table(sort_by='self_cpu_time_total', row_limit=50))
print("Output Shape ", outputs.shape)
```
Change-Id: Ia0fc13c61fc63e5c01485958d12ea65aab50aa2f
Fixes #ISSUE_NUMBER
| true
|
2,878,573,749
|
Triton aarch64 and triton sbsa
|
johnnynunez
|
closed
|
[
"oncall: releng"
] | 1
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
Why?
runners:
Github: https://github.blog/changelog/2025-01-16-linux-arm64-hosted-runners-now-available-for-free-in-public-repositories-public-preview/
windows arm q2 2025: https://github.com/github/roadmap/issues/1098
Devices:
GH200 and future devices: DIGITS, Jetson Thor, and CUDA Arm laptops are coming.
NVIDIA is merging SBSA and ARM64 together.
### Alternatives
Currently everything is x86_64 only: https://download.pytorch.org/whl/nightly/pytorch-triton/
An alternative, at this moment, for Jetson: https://github.com/dusty-nv/jetson-containers/tree/master/packages/ml/triton
### Additional context
Useful for GitHub Arm runners, e.g. the flash-attention repository on GH200.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @snadampal @milpuz01 @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov
| true
|
2,878,566,821
|
[BE] Parameterize TestSDPA in test_mps.py
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147856
| true
|
2,878,535,154
|
[DO NOT MERGE] Migrate from oneDNN Inner Product to oneDNN MatMul for mkldnn_linear and mkldnn_linear_backward
|
jiayisunx
|
open
|
[
"module: cpu",
"module: mkldnn",
"open source",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147855
* #147360
* #147073
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,878,495,103
|
[ONNX] BitwiseOr was generated for bool inputs (invalid)
|
JuntaoLiu01
|
open
|
[
"module: onnx",
"triaged"
] | 27
|
NONE
|
### 🐛 Describe the bug
```python
def trans2onnx_v2(torch_model, onnx_path):
image = torch.randn(1, 3, 640, 640)
mask = torch.randint(0, 1, (1, 1, 640, 640), dtype=torch.int64)
image = image.cuda()
mask = mask.cuda()
# work ok
onnx_program = torch.onnx.dynamo_export(torch_model,
image, mask)
# fail
export_options = torch.onnx.ExportOptions(dynamic_shapes=True)
onnx_program = torch.onnx.dynamo_export(torch_model,
image, mask, export_options=export_options)
onnx_program.save(onnx_path)
```
When exporting with
```python
onnx_program = torch.onnx.dynamo_export(torch_model,
image, mask)
```
the program works OK!
But errors occur when running with:
```python
export_options = torch.onnx.ExportOptions(dynamic_shapes=True)
onnx_program = torch.onnx.dynamo_export(torch_model,
image, mask, export_options=export_options)
```
the error is:
```
torch._dynamo.exc.Unsupported: unsupported operator: aten._fft_r2c.default
```
### Versions
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
Clang version: 13.0.1 (Red Hat 13.0.1-2.module+el8.6.0+37+eac49f58)
Libc version: glibc-2.28
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.119-19-0009.11-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.104.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.11 py310h5eee18b_0
[conda] mkl_random 1.2.8 py310h1128e8f_0
[conda] numpy 2.0.1 py310h5f9d8c6_1
[conda] numpy-base 2.0.1 py310hb5e798b_1
[conda] pytorch 2.4.0 py3.10_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtriton 3.0.0 py310 pytorch
[conda] torchvision 0.19.0 py310_cu121 pytorch
| true
|
2,878,308,461
|
DISABLED test_mixed_mm_exhaustive_dtypes (__main__.TestPatternMatcher)
|
pytorch-bot[bot]
|
open
|
[
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mixed_mm_exhaustive_dtypes&suite=TestPatternMatcher&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37769671030).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 5 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mixed_mm_exhaustive_dtypes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 405, in test_mixed_mm_exhaustive_dtypes
self._test_mixed_impl(fn, args, True, False, rtol=0.16, atol=1e-4)
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 333, in _test_mixed_impl
FileCheck().check("k_idx").check(".to(").check("tl.dot").run(code)
RuntimeError: Expected to find ".to(" but did not find it
Searched string:
acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=ACC_TYPE)
for k_idx in range(0, tl.cdiv(K, BLOCK_K)):
a_k_idx_vals = offs_k[None, :] + (k_idx * BLOCK_K)
b_k_idx_vals = offs_k[:, None] + (k_idx * BLOCK_K)
idx_m = offs_a_m[:, None]
idx_n = a_k_idx_vals
xindex = idx_n + 256*idx_m
a = tl.load(A + (xindex))
idx_m = b_k_idx_vals
idx_n = offs_b_n[None, :]
xindex = idx_n + 256*idx_m
b = tl.load(B + (xindex))
acc += tl.dot(a, b, allow_tf32=ALLOW_TF32)
# rematerialize rm and rn to save registers
rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
idx_m = rm[:, None]
idx_n = rn[None, :]
mask = (idx_m < M) & (idx_n < N)
# inductor generates a suffix
xindex = idx_n + 256*idx_m
tl.store(out_ptr0 + (tl.broadcast_to(xindex, acc.shape)), acc, mask)
''', device_str='cuda')
meta0 = {'GROUP_M': 8, 'EVEN_K': True, 'ALLOW_TF32': 'False', 'ACC_TYPE': 'tl.float32', 'BLOCK_M': 64, 'BLOCK_N': 32, 'BLOCK_K': 128, 'matrix_instr_nonkdim': 16}
async_compile.wait(globals())
del async_compile
def call(args):
arg0_1, arg1_1 = args
args.clear()
assert_size_stride(arg0_1, (256, 256), (256, 1))
assert_size_stride(arg1_1, (256, 256), (256, 1))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
buf0 = empty_strided_cuda((256, 256), (256, 1), torch.float16)
# Topologically Sorted Source Nodes: [to], Original ATen: [aten._to_copy]
stream0 = get_raw_stream(0)
triton_poi_fused__to_copy_0.run(arg0_1, buf0, 65536, grid=grid(65536), stream=stream0)
del arg0_1
buf1 = empty_strided_cuda((256, 256), (256, 1), torch.float16)
# Topologically Sorted Source Nodes: [to, mm], Original ATen: [aten._to_copy, aten.mm]
stream0 = get_raw_stream(0)
triton_tem_fused__to_copy_mm_1.run(arg1_1, buf0, buf1, grid=torch._inductor.kernel.mm_common.mm_grid(256, 256, meta0), stream=stream0)
del arg1_1
del buf0
return (buf1, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((256, 256), (256, 1), device='cuda:0', dtype=torch.int8)
arg1_1 = rand_strided((256, 256), (256, 1), device='cuda:0', dtype=torch.float16)
fn = lambda: call([arg0_1, arg1_1])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: .to(
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_pattern_matcher.py TestPatternMatcher.test_mixed_mm_exhaustive_dtypes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_pattern_matcher.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,878,308,327
|
DISABLED test_inductor_inplace_op_on_view (__main__.CompileTest)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: c10d",
"oncall: pt2"
] | 19
|
NONE
|
Platforms: inductor, rocm, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inductor_inplace_op_on_view&suite=CompileTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37772363869).
Over the past 3 hours, it has been determined flaky in 20 workflow(s) with 40 failures and 20 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inductor_inplace_op_on_view`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `distributed/test_c10d_functional_native.py`
cc @clee2000 @wdvr @chauhang @penguinwu
| true
|
2,878,232,001
|
expandable_segments does not work for CUDAPluggableAllocator + MemPool
|
youkaichao
|
open
|
[
"module: cuda",
"triaged",
"module: CUDACachingAllocator"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
Here is an example code:
```python
import torch
import torch.utils.cpp_extension
cpp_sources = """
// save as alloc.cc
// compile with g++ alloc.cc -o alloc.so -I/usr/local/cuda/include -shared -fPIC
#include <sys/types.h>
#include <cuda_runtime_api.h>
#include <iostream>
// Compile with g++ alloc.cc -o alloc.so -I/usr/local/cuda/include -shared -fPIC
extern "C" {
void* my_malloc(ssize_t size, int device, cudaStream_t stream) {
void *ptr;
cudaMalloc(&ptr, size);
std::cout<<"C side: alloc "<<ptr<< " " <<size<<std::endl;
return ptr;
}
void my_free(void* ptr, ssize_t size, int device, cudaStream_t stream) {
std::cout<<"C side: free "<<ptr<< " "<<size<<std::endl;
cudaFree(ptr);
}
// hack: add this placeholder function to let PyTorch generate module extension template
at::Tensor sin_add(at::Tensor x, at::Tensor y) {
return x.sin() + y.sin();
}
}
"""
module = torch.utils.cpp_extension.load_inline("alloc", cpp_sources, with_cuda=True, functions=['sin_add'])
so_file = module.__file__
def f():
new_alloc = torch.cuda.memory.CUDAPluggableAllocator(
so_file, 'my_malloc', 'my_free')
with torch.cuda.use_mem_pool(torch.cuda.MemPool(new_alloc._allocator)):
for factor in (1024, 1024 ** 2):
data = torch.empty((60, factor), dtype=torch.uint8, device="cuda")
del data
data = torch.empty((70, factor), dtype=torch.uint8, device="cuda")
del data
f()
import gc
gc.collect()
```
When I run it directly, I can see my allocator is called, and there are some lines printed:
```text
C side: alloc 0x7ff075e00000 2097152
C side: alloc 0x7ff058000000 62914560
C side: alloc 0x7ff052000000 73400320
```
However, when I enable expandable segments via `PYTORCH_CUDA_ALLOC_CONF="expandable_segments:True" python test.py`, I get no output. My custom allocator is not used at all.
This is reported by vllm users who use vllm's sleep mode. see https://github.com/vllm-project/vllm/pull/11743#issuecomment-2681730438 .
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1071-nvidia-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 570.86.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @ptrblck @msaroufim @eqy
| true
|
2,878,224,624
|
The issue where opt_output in fx_graph_runnable.py is inconsistent with the actual output when testing run_repro(acc=True)
|
MovieTrack
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
Conclusion
✔ Use .clone() before modifying tensors from expand(), view(), or as_strided().
✔ Ensure tensors are .contiguous() before operations.
✔ Debug with x.is_contiguous() to check memory layout.
If the issue persists, share a code snippet for further debugging! 🚀
### Versions
| true
|
2,878,219,992
|
Immediate Global State Mutation After Using `_force_original_view_tracking` Decorator
|
vwrewsge
|
open
|
[
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher",
"internal ramp-up task"
] | 1
|
NONE
|
### 🐛 Describe the bug
Similar to [#113359](https://github.com/pytorch/pytorch/pull/113359), when using the `_force_original_view_tracking` decorator in PyTorch, the global view-replay state (`torch._C._is_view_replay_enabled()`) is mutated immediately after the decorator is applied, even though it should not be modified until the decorated function is executed.
# Code
```
import torch
from torch.autograd.grad_mode import _force_original_view_tracking
# Save original view replay state
original_mode = torch._C._is_view_replay_enabled()
print(original_mode)
# Apply decorator (should NOT modify global state until function execution)
@_force_original_view_tracking(not original_mode)
def test_function(x):
return x
# Check if global state was mutated immediately after decoration
current_mode = torch._C._is_view_replay_enabled()
if current_mode != original_mode:
print("NOT THE SAME")
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,878,212,438
|
[inductor] [cpu] `torch.nn.Fold` throws assertionerror in codegen
|
shaoyuyoung
|
closed
|
[
"triaged",
"oncall: pt2",
"oncall: cpu inductor"
] | 4
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**description**: when compiling `torch.nn.Fold`, eager passes the check while inductor throws an assertion error on CPU.
**device backend**: only CPP
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.fold = torch.nn.Fold(output_size=(4, 4), kernel_size=(2, 2), stride=(2, 2))
def forward(self, x):
x = self.fold(x)
return x
model = Model()
x = torch.randn(1, 4, 4)
inputs = [x]
def run_test(model, inputs, backend):
torch.manual_seed(0)
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
c_output = model(*inputs)
print(c_output)
except Exception as e:
print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
### Error logs
```
tensor([[[[-1.4824, 0.3560, -0.1719, 0.8068],
[ 0.1535, 1.0522, -0.2272, 0.0148],
[-1.4766, -0.4049, -0.1608, 0.3579],
[ 0.5846, -1.5835, -0.9422, -0.3230]]]])
C0225 20:16:07.828000 162414 site-packages/torch/_inductor/scheduler.py:1163] [0/0] Error in codegen for ComputedBuffer(name='buf1', layout=MutationLayoutSHOULDREMOVE('cpu', torch.float32, size=[1, 1, 4, 4], stride=[16, 16, 4, 1]), data=Scatter(device=device(type='cpu'), dtype=torch.float32, inner_fn=<function ReinterpretView.make_loader.<locals>.loader at 0x7f078c1b3060>, ranges=[1, 1, 2, 2, 2, 2], output_indexer=<function index_output_size_and_inner_fn.<locals>.fn at 0x7f078c1b2fc0>, scatter_mode='atomic_add'))
AssertionError:
```
### Versions
nightly 20250225
cc @chauhang @penguinwu
| true
|
2,878,186,396
|
[inductor] [silence] inconsistent swap wih eager when compiling `torch.rot90-torch.randn_like`
|
shaoyuyoung
|
open
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**description**: this bug is triggered only when `torch.rot90` and `torch.randn_like` are used together. In my case, you can see that the **second element (-2.1788)** and the **third element (-0.2934)** are swapped by inductor (compared with eager).
**device backend**: both triton and CPP
**note**: I have used `config.fallback_random = True` and `torch.manual_seed(0)`
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
torch.manual_seed(0)
x = torch.rot90(x, k=1, dims=[2, 3])
print(x)
x = torch.randn_like(x)
print(x)
return x
model = Model()
x = torch.randn(1, 1, 2, 2)
inputs = [x]
def run_test(model, inputs, backend):
if backend != "eager":
model = torch.compile(model, backend=backend)
torch.manual_seed(0)
output = model(*inputs)
return output
output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
print(torch.allclose(output, c_output, 1e-3, 1e-3, equal_nan=True))
print(torch.max(torch.abs(output - c_output)))
```
### Error logs
```
tensor([[[[-0.2934, 0.5684],
[ 1.5410, -2.1788]]]])
tensor([[[[ 1.5410, -2.1788],
[-0.2934, 0.5684]]]])
tensor([[[[-0.2934, 0.5684],
[ 1.5410, -2.1788]]]])
tensor([[[[ 1.5410, -0.2934],
[-2.1788, 0.5684]]]])
False
tensor(1.8854)
```
### Versions
nightly 20250225
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bdhirsh @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
2,878,176,533
|
`RuntimeError` not raised for `out=` argument in `torch.tensordot` with `requires_grad` tensors
|
vwrewsge
|
closed
|
[
"module: autograd",
"triaged",
"actionable"
] | 1
|
NONE
|
### 🐛 Describe the bug
When torch.tensordot is called with tensors that have requires_grad=True and an out= argument, it should raise a RuntimeError, since out= variants do not support automatic differentiation; currently no error is raised.
# Code
```
import torch
# Create input tensors with requires_grad=True
a = torch.empty((2, 3), requires_grad=True)
b = torch.empty((3, 4), requires_grad=True)
c = torch.empty((2, 4))
# Should throw RuntimeError: "functions with out=... arguments don't support automatic differentiation"
torch.tensordot(a, b, dims=([1], [0]), out=c)
```
# Similar PR
[#117067](https://github.com/pytorch/pytorch/pull/117067)
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,878,148,689
|
Set disable_clone=True when running opt_gm
|
Danielmic
|
open
|
[
"triaged",
"open source",
"Stale",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
Fixes #147843
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,878,135,773
|
Python warnings are printed multiple times
|
vwrewsge
|
open
|
[
"oncall: jit"
] | 2
|
NONE
|
### 🐛 Describe the bug
Following the change in [PR #128581](https://github.com/pytorch/pytorch/pull/128581), Python warnings should be printed once unless the warning cache is reset. However, when running the following code, the warning appears multiple times instead of once.
```
import torch
# A function that causes the JIT tracer to emit a warning when traced
def func(x):
# Use non-deterministic operation to trigger a warning
return x + torch.rand_like(x)
for _ in range(10):
traced = torch.jit.trace(func, (torch.ones(2, 2),))
# Run the traced function to ensure tracing occurs
traced(torch.ones(2, 2))
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,878,118,128
|
The opt_output in `fx_graph_runnable.py` is inconsistent with the actual output when testing run_repro(acc=True).
|
Danielmic
|
open
|
[
"triaged",
"oncall: pt2"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
If the input tensor is created using expand(), view(), or as_strided(), cloning the input in the function same_two_models fails, and the result mismatches the output of directly running opt_gm(list(args)).
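As a minimal illustration (hypothetical, not the reporter's code), copying into a tensor produced by expand() already reproduces the failure mode, since its elements alias the same memory:
```python
import torch

# Hypothetical repro: expand() gives a stride-0 dimension, so several output
# elements alias a single storage location and writing into it is rejected.
x = torch.randn(1, 4)
result = x.expand(3, 4)          # rows alias one row of storage
result.copy_(torch.randn(3, 4))  # RuntimeError: more than one element of the
                                 # written-to tensor refers to a single memory location
```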
### Error logs
Running `result.copy_(x.clone())` throws a RuntimeError:
`more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation.`
### Versions
PyTorch version: 2.6.0a0+ecf3bae40a.nv25.01
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 535.183.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
cc @chauhang @penguinwu
| true
|
2,878,066,810
|
[inductor] `torch.slice_scatter` throws `AssertionError` when an internal `float32` tensor is involved
|
shaoyuyoung
|
open
|
[
"good first issue",
"triaged",
"oncall: pt2",
"module: inductor"
] | 9
|
CONTRIBUTOR
|
### 🐛 Describe the bug
**description**: when an internal `float32` tensor is involved (it is `y` in my case), eager passes the check and returns 0, while inductor throws an assertion error.
**device**: both on triton and CPP
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config
config.fallback_random = True
torch.set_grad_enabled(False)
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
def forward(self, x):
y = torch.Tensor([0]) # y dtype: torch.float32
x = torch.slice_scatter(y, x, 0)
return x
model = Model()
x = torch.Tensor([0]).to(torch.int64)
inputs = [x]
def run_test(model, inputs, backend):
model.eval()
torch.manual_seed(0)
if backend != "eager":
model = torch.compile(model, backend=backend)
try:
c_output = model(*inputs)
print(c_output)
except Exception as e:
print(e)
run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```
### Error logs
```
tensor([0.])
LoweringException: AssertionError:
target: aten.slice_scatter.default
args[0]: TensorBox(StorageBox(
Pointwise(
'cpu',
torch.float32,
def inner_fn(index):
_ = index
tmp0 = ops.constant(0.0, torch.float32)
return tmp0
,
ranges=[1],
origin_node=full_default,
origins=OrderedSet([full_default])
)
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cpu', torch.int64, size=[1], stride=[1]))
))
```
### Versions
nightly 20250225
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
| true
|
2,878,058,590
|
AttributeError: Can't pickle local object 'make_opaque_bitwise_fn.<locals>.BitwiseFn'
|
default1360
|
open
|
[
"module: pickle",
"triaged",
"module: dynamic shapes"
] | 1
|
NONE
|
### 🐛 Describe the bug
I encountered an issue while trying to pickle an instance of a dynamically generated class using `make_opaque_bitwise_fn` from `torch.utils._sympy.functions`.
```
import pickle
import sympy
from torch.utils._sympy.functions import make_opaque_bitwise_fn
# Generate the bitwise_and function class
BitwiseFn_bitwise_and = make_opaque_bitwise_fn("bitwise_and", "and_")
# Create an instance of the dynamically generated class
x = BitwiseFn_bitwise_and(sympy.Symbol('a'), sympy.Symbol('b'))
data = pickle.dumps(x)
```
# Output
```
AttributeError: Can't pickle local object 'make_opaque_bitwise_fn.<locals>.BitwiseFn'
```
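For context, this is the standard CPython limitation for classes defined inside a function body; a minimal illustration with a plain local class (no torch involved) shows the same failure:
```python
import pickle

def make_local_class():
    # Classes created inside a function are not importable by module-level
    # name, so pickle cannot locate them when serializing instances.
    class Local:
        pass
    return Local

obj = make_local_class()()
pickle.dumps(obj)  # AttributeError: Can't pickle local object 'make_local_class.<locals>.Local'
```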
# Similar PR
https://github.com/pytorch/pytorch/pull/138395
### Versions
torch 2.6.0
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
2,878,036,896
|
`AssertionError` in `torch.compile`
|
default1360
|
closed
|
[
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 2
|
NONE
|
### 🐛 Describe the bug
When attempting to compile the `torch.norm` function using `torch.compile`, an `AssertionError` occurs.
```
import torch
compiled_norm = torch.compile(torch.norm)
```
### Versions
torch 2.6.0
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,877,958,088
|
IndexError: tuple index out of range when running vLLM script
|
qiangzaiXu
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
**Description:**
When running the provided Python script to load and generate text from a model in vllm, an error occurs during the random seed initialization.
```python
from vllm import LLM, SamplingParams
if __name__ == '__main__':
# Sample prompts
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
# Create an LLM object
llm = LLM(model="/data2/Llama-2-70b-hf", dtype="float16", tensor_parallel_size=4, enforce_eager=True, trust_remote_code=True)
# Generate texts from the prompts
outputs = llm.generate(prompts, sampling_params)
# Print the outputs
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
The specific error is:
```python
IndexError: tuple index out of range
```
This happens during the call to `torch.cuda.default_generators[i]`, which causes `torch.manual_seed` to fail.
**Error Traceback:**
[rank0]: Traceback (most recent call last):
[rank0]: File "/data/offline_4.py", line 19, in <module>
[rank0]: llm = LLM(model="/data2/Llama-2-70b-hf", dtype="float16", tensor_parallel_size=4, enforce_eager=True, trust_remote_code=True)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/utils.py", line 1051, in inner
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/llm.py", line 247, in __init__
[rank0]: self.llm_engine = self.engine_class.from_engine_args(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 484, in from_engine_args
[rank0]: engine = cls(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
[rank0]: self.model_executor = executor_class(vllm_config=vllm_config, )
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 262, in __init__
[rank0]: super().__init__(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 51, in __init__
[rank0]: self._init_executor()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/executor/mp_distributed_executor.py", line 124, in _init_executor
[rank0]: self._run_workers("init_device")
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers
[rank0]: driver_worker_output = run_method(self.driver_worker, sent_method,
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/utils.py", line 2220, in run_method
[rank0]: return func(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 170, in init_device
[rank0]: set_random_seed(self.model_config.seed)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/utils.py", line 10, in set_random_seed
[rank0]: current_platform.seed_everything(seed)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/vllm/platforms/interface.py", line 224, in seed_everything
[rank0]: torch.manual_seed(seed)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_compile.py", line 31, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 600, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/random.py", line 46, in manual_seed
[rank0]: torch.cuda.manual_seed_all(seed)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/cuda/random.py", line 131, in manual_seed_all
[rank0]: _lazy_call(cb, seed_all=True)
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py", line 244, in _lazy_call
[rank0]: callable()
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/cuda/random.py", line 128, in cb
[rank0]: default_generator = torch.cuda.default_generators[i]
[rank0]: IndexError: tuple index out of range
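A quick diagnostic sketch (an assumption on my part, not from the report): the seeding callback indexes `torch.cuda.default_generators[i]` for every `i` in `range(torch.cuda.device_count())`, so comparing the two lengths shows whether the generator tuple was built for fewer devices than the seeding loop expects.
```python
import torch

# Diagnostic only: the IndexError means len(torch.cuda.default_generators)
# is smaller than the device count that manual_seed_all iterates over.
print("device_count      :", torch.cuda.device_count())
print("default_generators:", len(torch.cuda.default_generators))
```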
### Versions
PyTorch version: 2.5.0a0+872d972e41.nv24.08
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 535.183.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 144
On-line CPU(s) list: 0-143
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8374C CPU @ 2.70GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 36
Socket(s): 2
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3.4 MiB (72 instances)
L1i cache: 2.3 MiB (72 instances)
L2 cache: 90 MiB (72 instances)
L3 cache: 108 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-35,72-107
NUMA node1 CPU(s): 36-71,108-143
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cudnn-frontend==1.5.2
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] optree==0.12.1
[pip3] pynvjitlink==0.2.3
[pip3] pytorch-triton==3.0.0+dedb7bdf3
[pip3] torch==2.5.0a0+872d972e41.nv24.8
[pip3] torch_tensorrt==2.5.0a0
[pip3] torchvision==0.20.0a0
| true
|
2,877,836,094
|
[Triton upstream] [ROCm]: `RuntimeError: Triton Error [HIP]: Code: 209, Messsage: no kernel image is available for execution on the device`
|
jataylo
|
closed
|
[
"module: rocm"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
Testing on latest torch/triton, running into the following failures:
```
test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCUDA::test_comprehensive_mvlgamma_mvlgamma_p_5_cuda_float16
test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCUDA::test_comprehensive_mvlgamma_mvlgamma_p_5_cuda_float32
```
```
torch._inductor.exc.InductorError: RuntimeError: Triton Error [HIP]: Code: 209, Messsage: no kernel image is available for execution on the device
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_mvlgamma_mvlgamma_p_5_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
### Versions
PyTorch/Triton TOT
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd
| true
|
2,877,765,610
|
[Dynamo] Fix `is_compile_supported()` when `device_type` contains device index
|
shink
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 20
|
CONTRIBUTOR
|
Fixes #147826
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,877,600,681
|
Fix recent regression in evaluate_expr that affects cache lookups
|
laithsakka
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147836
PR https://github.com/pytorch/pytorch/pull/146939/ added an argument to evaluate_expr for logging purposes.
This caused a regression that we initially attributed to calling id() on the SymNode.
Digging deeper, I found that although the new argument does not affect the result of evaluate_expr, it does break cache lookups.
I refactored the code to avoid using expr_sym_node_id in the cache lookup, introduced evaluate_sym_node, and simplified the calls to evaluate_expr.
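For intuition, a minimal sketch (illustrative only, not the actual evaluate_expr code) of how a logging-only argument can defeat caching once it becomes part of the cache key:
```python
from functools import lru_cache

# Illustrative only: the log-only argument participates in the cache key,
# so calls that would otherwise hit the cache now miss.
@lru_cache(maxsize=None)
def evaluate(expr, log_id=None):
    print(f"cache miss for {expr!r}")
    return len(expr)  # stand-in for real evaluation

evaluate("a + b", log_id=1)  # miss
evaluate("a + b", log_id=2)  # miss again: different key, same result
```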
#suppress-bc-linter
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,877,594,634
|
[Intel GPU] Add synchronize() in torch.utils.benchmark
|
DDEle
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 10
|
CONTRIBUTOR
|
When following https://pytorch.org/tutorials/recipes/recipes/benchmark.html on XPU, I noticed that the device is not synchronized in the benchmark. This PR fixes that and aligns the behavior with CUDA.
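For context, a usage sketch of the tutorial's Timer pattern that this change affects (assuming an XPU-enabled build; not taken from the PR):
```python
import torch
import torch.utils.benchmark as benchmark

# Assumes an XPU-enabled build. Without a device synchronize around the timed
# statement, asynchronous kernels may still be running when timeit() returns,
# so the reported time would be too small.
x = torch.randn(1024, 1024, device="xpu")
timer = benchmark.Timer(stmt="x @ x", globals={"x": x})
print(timer.timeit(100))
```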
| true
|
2,877,587,777
|
[test][do not merge] Upgrade oneDNN to v3.7(6)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,877,586,156
|
[test][do not merge] Upgrade oneDNN to v3.7 (5)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,877,584,316
|
[test][do not merge] Upgrade oneDNN to v3.7 (4)
|
yanbing-j
|
closed
|
[
"module: mkldnn",
"open source",
"module: arm",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 @voznesenskym @penguinwu @EikanWang @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|