| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,761,200,623
|
[EZ] Update jinja2 to 3.1.5
|
malfet
|
closed
|
[
"better-engineering",
"Merged",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
To make Dependabot happy about https://cwe.mitre.org/data/definitions/150.html
| true
|
2,761,195,654
|
Update scheduler.py
|
malfet
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143922
* #143921
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,761,195,618
|
Add mps to GPU_TYPES
|
malfet
|
closed
|
[
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143922
* __->__ #143921
Because it is a GPU
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,761,174,598
|
[ReduceOps] Add dimension checking for cummin()/cummax().
|
dcci
|
closed
|
[
"Merged",
"module: reductions",
"ciflow/trunk",
"release notes: linalg_frontend"
] | 6
|
MEMBER
|
Summary: cum{min,max} didn't guard against 0-d vector and allowed an arbitrary dimension to be passed.
Test Plan: torch_test.py
Fixes #71477
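A minimal repro sketch of the unchecked case described in the summary (the value and the out-of-range `dim` are arbitrary assumptions; with the dimension checking added here, the call should raise an IndexError instead of silently accepting the dimension):
```python
import torch

x = torch.tensor(3.0)                     # 0-d tensor
# Before this fix, an arbitrary dim was accepted for 0-d inputs;
# with dimension checking it should raise for an out-of-range dim.
values, indices = torch.cummax(x, dim=5)
print(values, indices)
```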
| true
|
2,761,154,042
|
remove allow-untyped-defs from ao/nn/qat/dynamic/modules/linear.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: AO frontend"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143919
| true
|
2,761,154,002
|
remove allow-untyped-defs from utils/tensorboard/_convert_np.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143918
| true
|
2,761,153,957
|
remove allow-untyped-defs from distributed/elastic/multiprocessing/subprocess_handler/handlers.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (torchelastic)"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143917
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,761,153,865
|
remove allow-untyped-defs from _inductor/codegen/aoti_hipify_utils.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143916
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,761,153,809
|
remove allow-untyped-defs from distributed/pipelining/_unflatten.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143916
* __->__ #143915
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,761,136,144
|
RuntimeError: could not create an engine
|
xyang2013
|
closed
|
[
"module: windows",
"triaged",
"module: xpu"
] | 25
|
NONE
|
### 🐛 Describe the bug
Hi, I experienced the following error (the message before the exception):
File c:\Users\xiaoy\anaconda3\envs\llm2\Lib\site-packages\torch\nn\modules\linear.py:125, in Linear.forward(self, input)
124 def forward(self, input: Tensor) -> Tensor:
--> 125 return F.linear(input, self.weight, self.bias)
RuntimeError: could not create an engine
The code runs fine if I set the device to 'cpu', but when I set it to 'xpu', I get the above error.
GPU: Intel ARC B580 (with the latest driver)
OS: Windows 11
Conda/Python: 3.12
PyTorch instance:
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/xpu
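For reference, a minimal sketch of the failing path described above (a plain `nn.Linear` forward on the `xpu` device); the layer sizes here are arbitrary assumptions, not taken from the original script:
```python
import torch
import torch.nn as nn

device = "xpu" if torch.xpu.is_available() else "cpu"
model = nn.Linear(128, 64).to(device)    # arbitrary sizes
x = torch.randn(8, 128, device=device)
y = model(x)   # on the affected setup this hits F.linear and raises
               # "RuntimeError: could not create an engine" on 'xpu'
print(y.shape)
```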
### Versions
PyTorch version: 2.6.0.dev20241222+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro (10.0.26100 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:48:34) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.26100-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Core(TM) i7-14700K
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3400
MaxClockSpeed: 3400
L2CacheSize: 28672
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.6.0.dev20241222+xpu
[conda] numpy 2.2.1 pypi_0 pypi
[conda] torch 2.6.0.dev20241222+xpu pypi_0 pypi
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,761,126,573
|
Set up Mac builds with clang >= 17 even though Xcode only has at most clang 16
|
swolchok
|
open
|
[
"module: binaries",
"module: ci",
"triaged",
"enhancement"
] | 4
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
This would enable a couple disparate improvements:
1) Our binary releases should include the latest compiler features and optimizations. The concrete motivating example is that the compiler used for Mac wheels apparently doesn't pass [`COMPILER_SUPPORTS_BF16_TARGET`](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cpu/ReducedPrecisionFloatGemvFastPathKernel.cpp#L161) (i.e., clang version greater than 15), which causes a slower bfloat16 gemv kernel to be used.
2) We should have test coverage for CPU bfloat16 support on Mac (#142703) -- clang 16 purports to be able to build it, but is buggy and we actually need 17+.
### Alternatives
do nothing until Apple gets around to releasing an Xcode with clang 17 or later and we get around to updating to it.
### Additional context
Xcode clang version history: https://gist.github.com/yamaya/2924292 . Latest at time of writing is Xcode 16.2 with `Apple clang version 16.0.0 (clang-1600.0.26.6)`
cc @seemethere @malfet @osalpekar @atalman @pytorch/pytorch-dev-infra
| true
|
2,761,088,011
|
Fix always true scaled_mm test
|
dnikolaev-amd
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: float8",
"ciflow/rocm"
] | 17
|
CONTRIBUTOR
|
Looks like `out_fp8` should use matmul without scales and `out_fp8_s` with scales.
Scales were optional arguments before PR https://github.com/pytorch/pytorch/pull/128683
After that, test_float8_scale started comparing two identical results and lost its meaning.
Reason for making scales required: https://github.com/pytorch/pytorch/pull/128683#issuecomment-2169146402
This PR uses scale=1.0 to compare the result with the scaled matmul.
cc @yanbing-j @vkuzo @albanD @kadeng @penguinwu
| true
|
2,761,034,531
|
Add `_benchmark_func` convenience method
|
malfet
|
closed
|
[
"Stale",
"release notes: benchmark",
"topic: improvements",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Which could be used to benchmark simple ops with just one line of code, for example:
```shell
% python -c "import torch;print(torch.testing._benchmark_func(torch.add, (1024, 1024), device='mps', dtype=torch.int32))"
<torch.utils.benchmark.utils.common.Measurement object at 0x1081dee40>
f(*args);torch.mps.synchronize()
setup: args = [torch.testing.make_tensor(s, dtype=torch.int32, device='mps') for s in (1024, 1024)]
Median: 145.63 us
IQR: 21.00 us (130.33 to 151.33)
1397 measurements, 1 runs per measurement, 1 thread
WARNING: Interquartile range is 14.4% of the median measurement.
This could indicate system fluctuation.
```
| true
|
2,761,029,625
|
The link for the source in page torch.Tensor.backward is broken.
|
qqwqqw689
|
closed
|
[
"module: docs",
"module: autograd",
"triaged",
"needs design"
] | 3
|
NONE
|
### 📚 The doc issue
The link for the source in page torch.Tensor.backward is broken.[link](https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html)
### Suggest a potential alternative/fix
_No response_
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
| true
|
2,761,016,695
|
cpp_wrapper: Move #includes to per-device header files
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ci-no-td"
] | 17
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144349
* #144293
* #144002
* __->__ #143909
This prepares us for the next PR in the stack, where we introduce pre-compiled per-device header files to save compilation time.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Differential Revision: [D67938955](https://our.internmc.facebook.com/intern/diff/D67938955)
| true
|
2,760,977,421
|
[EZ] Update sympy to 1.13.3
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
And remove python>=3.9 check as it currently covers all supported python versions
Fixes https://github.com/pytorch/pytorch/issues/143907
| true
|
2,760,882,282
|
update `sympy` version in `requirement.txt`
|
evan0greenup
|
closed
|
[
"triage review",
"module: build",
"module: ci"
] | 1
|
NONE
|
### 🐛 Describe the bug
The latest `sympy` version is `1.13.3`, but `torch` hard-pins `sympy` to `1.13.1`, which is inconvenient in environments that require the latest `sympy`.
### Versions
<https://github.com/pytorch/pytorch/blob/a20765a9c1e578beb5e53f9a3ef0c13ea6839768/requirements.txt#L19>
cc @malfet @seemethere @pytorch/pytorch-dev-infra @chauhang @penguinwu
| true
|
2,760,761,266
|
How to correctly asynchronously copy a GPU tensor to a CPU tensor in another process without introducing blocking?
|
zhanghb55
|
open
|
[
"needs reproduction",
"oncall: distributed",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
I am developing a distributed PyTorch application designed to asynchronously transfer data from a GPU process to a CPU process, ensuring that GPU computations remain non-blocking. In my current implementation, I utilize the non-blocking copy_ method to transfer data from a GPU tensor to a CPU tensor and then employ dist.isend to send the data to another rank. However, under certain conditions, this setup leads to a deadlock.
```python
import torch
import torch.distributed as dist
import os
def gpu_to_cpu_and_send(rank, size):
tensor = torch.randn(4096, 8192).cuda(rank) # On specific GPU
print(tensor[-1][-1])
print(f"Rank {rank}: Created tensor on GPU")
cpu_tensor = torch.zeros(4096, 8192)
cpu_tensor.copy_(tensor, non_blocking=True) # Non-blocking GPU to CPU copy
print(f"Rank {rank}: Copied tensor to CPU (non-blocking)")
if rank == 0:
print(f"Rank {rank}: Sending tensor to rank 1")
dist.isend(tensor=cpu_tensor, dst=1) # Sending data to rank 1
print(f"Rank {rank}: Data sent to rank 1")
def receive_data(rank, size):
received_tensor = torch.zeros(4096, 8192)
print(f"Rank {rank}: Waiting to receive data")
dist.recv(tensor=received_tensor, src=0) # Receiving data from rank 0
print(f"Rank {rank}: Received data from rank 0")
print(received_tensor[-1][-1])
def main():
rank = int(os.environ['RANK'])
size = int(os.environ['WORLD_SIZE'])
dist.init_process_group(backend='gloo', rank=rank, world_size=size)
if rank == 0:
gpu_to_cpu_and_send(rank, size)
elif rank == 1:
receive_data(rank, size)
if __name__ == "__main__":
main()
```
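Not part of the original report, but for context: a `non_blocking=True` device-to-host copy is only truly asynchronous into pinned memory, and the CPU tensor must be synchronized before another consumer (such as `dist.isend`) reads it. A minimal sketch of that pattern, assuming the same process-group setup as the script above:
```python
import torch
import torch.distributed as dist

def gpu_to_cpu_and_send_pinned(rank: int) -> None:
    tensor = torch.randn(4096, 8192).cuda(rank)
    # Pinned (page-locked) memory is required for copy_ to be truly async.
    cpu_tensor = torch.empty(4096, 8192, pin_memory=True)
    copy_stream = torch.cuda.Stream(device=rank)
    with torch.cuda.stream(copy_stream):
        cpu_tensor.copy_(tensor, non_blocking=True)
    # Wait for the copy to finish before handing the buffer to the comm layer.
    copy_stream.synchronize()
    if rank == 0:
        req = dist.isend(tensor=cpu_tensor, dst=1)
        req.wait()
```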
### Versions
torchrun --nproc_per_node=2 demo.py
Run with Nvidia GPU.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,760,730,501
|
The special size tensor containing batches has a difference of a few tens of thousands in calculation results between CPU and GPU
|
fine2copyV
|
open
|
[
"needs reproduction",
"module: cuda",
"triaged"
] | 4
|
NONE
|
### 🐛 Describe the bug
You can toggle the commented lines to switch configurations and re-run to see how the results change!
```
import torch.nn as nn
import torch.nn.functional as F
import torch
BN_MOMENTUM = 0.1
def conv3x3(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class ConvInGPUError(nn.Module):
def __init__(self):
super(ConvInGPUError, self).__init__()
self.block = BasicBlock
self.inplanes = 256
self.layer = self._make_layer(self.block, 256, 2, stride=2)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(planes * block.expansion, momentum=BN_MOMENTUM),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
out = self.layer(x.clone()) # B = 2
out_1 = self.layer(x.clone()[:1])
print(f'in forward max diff: torch.max(out - out_1): {torch.max(out[:1] - out_1[:1]):.20f}')
print(f'in forward min diff: torch.min(out - out_1): {torch.min(out[:1] - out_1[:1]):.20f}')
print(f'in forward mean diff: torch.mean(out - out_1): {torch.mean(out[:1] - out_1[:1]):.20f}')
return x
if __name__ == "__main__":
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True)
torch.manual_seed(42)
model = ConvInGPUError()
model.eval()
# you can compare the result of gpu and cpu by using the same input, but the result is different
# gpu
model.cuda()
input_data = torch.normal(0, 1, size=(2, 256, 48, 96)).cuda()
# cpu
# input_data = torch.normal(0, 1, size=(2, 256, 48, 96))
with torch.no_grad():
output = model(input_data)
print(output.shape)
"""
# input_data = torch.randn(2, 256, 96, 192)
# input_data = torch.normal(0,1,(2, 256, 48, 160))
# input_data = torch.normal(0,1,(2, 256, 40, 160))
"""
```
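As an aside (not from the original report), a common way to quantify such batch-vs-slice discrepancies is to compare with explicit dtype-appropriate tolerances rather than eyeballing raw differences; the tolerances below are arbitrary assumptions:
```python
import torch

def report_batch_slice_diff(full_out: torch.Tensor, sliced_out: torch.Tensor,
                            rtol: float = 1e-4, atol: float = 1e-5) -> None:
    # Compare the first batch element of the full-batch output
    # against the batch-of-one output, as in forward() above.
    diff = (full_out[:1] - sliced_out).abs()
    print(f"max abs diff:  {diff.max().item():.3e}")
    print(f"mean abs diff: {diff.mean().item():.3e}")
    print("allclose:", torch.allclose(full_out[:1], sliced_out, rtol=rtol, atol=atol))
```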
### Versions

The results on GPU and CPU are inconsistent.
1. With a batch size of 2, running the complete inference yields A.
2. Slicing to a single sample first and then running inference yields B. The difference A - B is not exactly zero.
date: 2024-12-27
cc @ptrblck @msaroufim @eqy
| true
|
2,760,710,227
|
[Inductor][CPP] Remove redundant Buffers after Grouped GEMM Fusion
|
leslie-fang-intel
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143904
* #143897
* #143796
**Summary**
In this PR, we remove the extra kernel arguments and the extra buffer allocations when any `MultiOutput Buffer` is consumed by an out-template epilogue. In that case, the `Grouped GEMM Template` should bypass storing the result in the `MultiOutput Buffer` and instead write it directly to the corresponding out-template epilogue.
**Remove extra kernel arguments**
For the case listed above, a `MultiOutput Buffer` shouldn't exist in the Kernel's args if it's consumed by an out-template epilogue. We mark this `MultiOutput Buffer` as `REMOVED` for this case.
**Remove the extra buffers allocation**
For the case listed above, a `MultiOutput Buffer` shouldn't be allocated. We introduce the `outputs_removed` attribute in the `CppTemplateBuffer`. This attribute tracks `MultiOutput Buffers` that are directly used by out-template epilogues. During code generation, if a `MultiOutput Buffer` is listed in `outputs_removed`, its buffer allocation line is omitted to prevent unnecessary memory usage.
**Test Plan**
```
python -u -m pytest -s -v test/inductor/test_cpu_select_algorithm.py -k test_grouped_linear_epilogue
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,760,678,480
|
[Quant][Inductor][X86] Separate unary post op fusion and lowering for qlinear
|
Xia-Weiwen
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"module: inductor",
"ciflow/inductor"
] | 8
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144318
* #144312
* #144224
* __->__ #143903
**Summary**
The current implementation fuses quantized ops and their post ops and lowers the fused op to the cpp backend in the same pass. It is better to separate post op fusion and lowering because
- it looks better in terms of design
- we need the post op fusion pass for PT2E quantization eager mode
This PR is the first of a series of PRs which separate post op fusion and lowering for quantized linear and convolution. It moves unary post op fusion of qlinear out of the lowering pass.
This PR moves the fusion pass from the lowering pass to after the weight-prepack pass. The workflow is
1. Weight prepack for qlinear so that `dq - linear` patterns are replaced by `onednn.qlinear_pointwise`
2. Fuse `onednn.qlinear_pointwise` and post ops
3. Lower to cpp backend
This PR adds additional `PatternMatcherPass`'s to handle the post op fusion. Pattern matchers used for fusion are reused.
**Test plan**
It is covered by existing UTs in `test_mkldnn_pattern_matcher.py` for post op fusion.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,760,655,983
|
[Easy] add quotes to shell activation commands
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143262
* __->__ #143902
| true
|
2,760,622,674
|
gpu, matmul, shape is bad, the debug quits and I got no way to hold it.
|
YagaoDirac
|
closed
|
[
"needs reproduction",
"module: cuda",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
python312 pytorch2.5.1+cu124
win11, vs code.
gtx1660
inside a customized autograd.function.
very small model.
I messed with the shape and the matmul threw. I started checking everything as usual, but the process quit about 20 seconds after the exception was thrown.
Then I moved the entire task to the CPU and everything worked.
The exception itself is accurate; the only problem is that the process quits automatically.
I know the GPU is not very good at reporting exceptions, but it normally never ends the process itself.
That's all the report. Thank you for reading. Simply close this issue after reading.
🎉🎉🎉
### Versions
python312 pytorch2.5.1+cu124
win11, vs code.
gtx1660
inside a customized autograd.function.
very small model.
cc @ptrblck @msaroufim @eqy
| true
|
2,760,618,934
|
FSDP mixed precision ignores buffer_dtype
|
GLivshits
|
closed
|
[
"oncall: distributed",
"module: fsdp"
] | 1
|
NONE
|
### 🐛 Describe the bug
Hello. I found out that buffers in FSDP are not cast to the requested dtype, and the code breaks. The user is forced to cast buffers each time in forward.
Piece of the error:
```
File "/home/user/regbuf_compile_debug.py", line 44, in forward
return nn.functional.conv2d(x, self.kernel, groups=self.kernel.shape[0], stride=2, padding=self.padding)
RuntimeError: expected scalar type Half but found Float
```
Repro script:
```
import argparse
import os
from contextlib import nullcontext
from typing import Tuple
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy
from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler
from torch.distributed.fsdp.wrap import ModuleWrapPolicy
from tqdm.auto import tqdm
torch._dynamo.config.inline_inbuilt_nn_modules = False
torch._dynamo.config.optimize_ddp = False
def setup(rank, world_size):
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12355"
dist.init_process_group("nccl", rank=rank, world_size=world_size)
def cleanup():
dist.destroy_process_group()
class NonLearnableConv(nn.Module):
def __init__(self, kernel: Tuple[int], in_channels: int):
super().__init__()
self.padding = (len(kernel) - 1) // 2
kernel = torch.tensor(kernel, dtype=torch.float32)
kernel = kernel / kernel.sum()
kernel = kernel.outer(kernel)[None, None].repeat(in_channels, 1, 1, 1)
self.register_buffer("kernel", kernel)
def forward(self, x: torch.Tensor) -> torch.Tensor:
print(x.dtype, self.kernel.dtype)
return nn.functional.conv2d(x, self.kernel, groups=self.kernel.shape[0], stride=2, padding=self.padding)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--grad_accum_steps", type=int, default=1)
parser.add_argument("--num_iterations", type=int, default=200)
parser.add_argument("--use_fsdp", action="store_true")
parser.add_argument("--use_compile", action="store_true")
args = parser.parse_args()
return args
def main(rank, world_size, args):
setup(rank, world_size)
torch.cuda.set_device(rank)
device = torch.device(f"cuda:{rank}")
dtype = torch.float16
model = nn.Sequential(
nn.Sequential(nn.Conv2d(3, 64, 3, padding=1)),
nn.Sequential(NonLearnableConv((1, 2, 2, 1), 64)),
nn.Sequential(nn.Conv2d(64, 3, 3, padding=1)),
nn.Sequential(NonLearnableConv((1, 2, 2, 1), 3)),
).to(device)
if args.use_fsdp:
model = FSDP(
module=model,
device_id=rank,
use_orig_params=args.use_compile,
sharding_strategy=ShardingStrategy.HYBRID_SHARD,
forward_prefetch=True,
limit_all_gathers=True,
auto_wrap_policy=ModuleWrapPolicy({nn.Sequential}),
mixed_precision=MixedPrecision(
param_dtype=dtype,
buffer_dtype=dtype,
reduce_dtype=dtype,
),
)
loss_amp_context = torch.amp.autocast("cuda", dtype=dtype, enabled=True)
model_amp_context = nullcontext()
scaler = ShardedGradScaler(enabled=dtype is torch.float16)
else:
loss_amp_context = torch.amp.autocast("cuda", dtype=dtype, enabled=True)
model_amp_context = loss_amp_context
scaler = torch.amp.GradScaler("cuda", enabled=dtype is torch.float16)
if args.use_compile:
print("Trying compile.")
model.compile(mode="default", dynamic=False)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.98))
iterator = range(args.num_iterations)
if rank == 0:
iterator = tqdm(iterator, total=args.num_iterations, miniters=10)
for _ in iterator:
for _ in range(args.grad_accum_steps):
x = torch.randn(args.batch_size, 3, 128, 128, device=device)
with model_amp_context:
out = model(x)
with loss_amp_context:
loss = out.mean() / args.grad_accum_steps
loss_test = loss.clone() # Ensure local loss is not changed by allreduce
torch.distributed.all_reduce(loss_test) # Check if any gpu has NaN loss
if torch.isnan(loss_test):
raise ValueError("NaN loss.")
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
cleanup()
if __name__ == "__main__":
args = parse_args()
world_size = torch.cuda.device_count()
torch.multiprocessing.freeze_support()
if world_size == 1:
main(0, world_size, args)
else:
torch.multiprocessing.spawn(fn=main, args=(world_size, args), nprocs=world_size, join=True)
```
Successful launch without FSDP:
`python regbuf_compile_debug.py`
To break:
`python regbuf_compile_debug.py --use_fsdp`
This error also shows up in a real setup on a UNet model with attention blocks (where buffers are cast to match the input in forward) when using FSDP, compile, and gradient accumulation: dtype errors appear, and sometimes _dynamo does not even find the registered buffer and fails with AttributeError.
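A minimal sketch of the per-call workaround mentioned above (casting buffers each time in forward), applied to the `NonLearnableConv` module from the repro script; this assumes the definitions above and is a workaround sketch, not the fix being requested:
```python
class NonLearnableConvWorkaround(NonLearnableConv):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cast the registered buffer to the input dtype on every call,
        # since FSDP's MixedPrecision(buffer_dtype=...) is not applied to it.
        kernel = self.kernel.to(dtype=x.dtype)
        return nn.functional.conv2d(
            x, kernel, groups=kernel.shape[0], stride=2, padding=self.padding
        )
```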
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.210-39.1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 550.54.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 120
On-line CPU(s) list: 0-119
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7662 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 120
Stepping: 0
BogoMIPS: 3992.37
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
Virtualization: AMD-V
L1d cache: 7.5 MiB (120 instances)
L1i cache: 7.5 MiB (120 instances)
L2 cache: 60 MiB (120 instances)
L3 cache: 1.9 GiB (120 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-29
NUMA node1 CPU(s): 30-59
NUMA node2 CPU(s): 60-89
NUMA node3 CPU(s): 90-119
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==5.0.4
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] open-clip-torch==2.24.0
[pip3] pytorch-warmup==0.1.1
[pip3] torch==2.5.1
[pip3] torch-fidelity==0.3.0
[pip3] torch-model-archiver==0.11.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torch-workflow-archiver==0.2.14
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchsde==0.2.6
[pip3] torchserve==0.11.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] lovely-numpy 0.2.13 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] open-clip-torch 2.24.0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-model-archiver 0.11.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torch-workflow-archiver 0.2.14 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchmetrics 1.4.0.post0 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchserve 0.11.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,760,616,632
|
Fix boundary conditions for hardswish backward
|
CaoE
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: not user facing"
] | 7
|
COLLABORATOR
|
Fixes #136345.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,760,613,728
|
Does PyTorch 2.5.1 support this operator: aclnnFusedInferAttentionScoreV2?
|
ZWQ2-A11Y
|
closed
|
[
"triage review",
"module: PrivateUse1"
] | 3
|
NONE
|
Does PyTorch 2.5.1 support this operator: aclnnFusedInferAttentionScoreV2?
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens
| true
|
2,760,568,736
|
[Inductor][CPP] Enable Epilogue Fusion for Grouped GEMM Template
|
leslie-fang-intel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143897
* #143796
**Summary**
In this PR, we enable epilogue fusion and code generation for Grouped GEMM. Here is a high-level description of how we implement it.
**Fusion**
- The Grouped GEMM Template produces a `Template Buffer` with a `MultiOutputLayout` and a set of `MultiOutput Buffers`, where each buffer corresponds to a specific GEMM.
- During the initial round of fusion, the `Template Buffer` and all associated `MultiOutput Buffers` are fused into a `FusedSchedulerNode` by extending the existing fusion design.
- In subsequent fusion rounds, this `FusedSchedulerNode` can further fuse with its epilogues, following the original fusion design principles.
**Code Gen**
We maintain a list of epilogues and codegen it one by one.
- If any of the GEMMs has a bias, we create an extra `bias_add` epilogue and prepend it to the front of the epilogue list.
- If any of the GEMMs has no epilogue, we create a `to_bf16` copy epilogue and append it to the end of the epilogue list.
**TestPlan**
```
python -u -m pytest -s -v test/inductor/test_cpu_select_algorithm.py -k test_grouped_linear_epilogue
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,760,465,533
|
Using acc_t for log_softmax
|
yanbing-j
|
open
|
[
"module: cpu",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
COLLABORATOR
|
This PR fixes https://github.com/pytorch/pytorch/issues/140222 by using a higher-precision accumulate type for the log_softmax forward. The reproducer in the issue now passes.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143896
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
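As a side note, a minimal sketch (not from the PR) of the precision gap that motivates a higher-precision accumulate type for reduced-precision inputs; the tensor shape and scale are arbitrary assumptions:
```python
import torch

x = torch.randn(1, 4096, dtype=torch.bfloat16) * 10

# Reference: compute in float32, then cast back to bf16.
ref = torch.log_softmax(x.float(), dim=-1).to(torch.bfloat16)

# bf16 path, whose internal accumulation precision is what this PR improves.
out = torch.log_softmax(x, dim=-1)

print((out.float() - ref.float()).abs().max())
```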
| true
|
2,760,441,591
|
When using torch.compile to compile the function _kernel_make_viewless_tensor, an error occurs: AssertionError: wrong number of dimensions
|
FY-Summer
|
closed
|
[
"triaged",
"oncall: pt2",
"module: inductor"
] | 3
|
NONE
|
### 🐛 Describe the bug
test device: NVidia L20
software version:
torch 2.5.1
torchaudio 2.5.1
torchvision 0.20.1
triton 3.1.0
The test code is as follows.
I'm sure it's related to the `requires_grad` parameter of the function `_kernel_make_viewless_tensor`, because changing it to False allows the code to pass, and the graph generated by torch.compile is different.
```
# The codes are sourced from https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/core/utils.py:183
def _kernel_make_viewless_tensor(inp, requires_grad):
"""Make a viewless tensor.
View tensors have the undesirable side-affect of retaining a reference
to the originally-viewed tensor, even after manually setting the '.data'
field. This method creates a new tensor that links to the old tensor's
data, without linking the viewed tensor, referenced via the '._base'
field.
"""
out = torch.empty((1,), dtype=inp.dtype, device=inp.device, requires_grad=requires_grad)
out.data = inp.data
return out
t1 = torch.randn(20, 50, 30, dtype=torch.bfloat16).to('cuda')
c = torch.compile(_kernel_make_viewless_tensor)
t2 = _kernel_make_viewless_tensor(t1, True)
t3 = c(t1, True)
print(f"allclose result = {torch.allclose(t2, t3, atol=1e-5, rtol=1e-5)}")
```
The test results are as follows:
```
Traceback (most recent call last):
File "/data/test/b.py", line 20, in <module>
t3 = c(t1, True)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/data/test/b.py", line 12, in _kernel_make_viewless_tensor
out = torch.empty((1,), dtype=inp.dtype, device=inp.device, requires_grad=requires_grad)
File "/data/test/b.py", line 12, in torch_dynamo_resume_in__kernel_make_viewless_tensor_at_12
out = torch.empty((1,), dtype=inp.dtype, device=inp.device, requires_grad=requires_grad)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1100, in forward
return compiled_fn(full_args)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 321, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 667, in inner_fn
outs = compiled_fn(args)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 488, in wrapper
return compiled_fn(runtime_args)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/codecache.py", line 1478, in __call__
return self.current_callable(inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/utils.py", line 1977, in run
return model(new_inputs)
File "/tmp/torchinductor_root/ld/cldpqwvjtbpm3peqlchlnst5etn44gyzsepxnck35bn7pm4epvvj.py", line 35, in call
assert_size_stride(arg0_1, (20, 50, 30), (1500, 30, 1))
AssertionError: wrong number of dimensions
# /tmp/torchinductor_root/ld/cldpqwvjtbpm3peqlchlnst5etn44gyzsepxnck35bn7pm4epvvj.py
# AOT ID: ['0_inference']
from ctypes import c_void_p, c_long, c_int
import torch
import math
import random
import os
import tempfile
from math import inf, nan
from torch._inductor.hooks import run_intermediate_hooks
from torch._inductor.utils import maybe_profile
from torch._inductor.codegen.memory_planning import _align as align
from torch import device, empty_strided
from torch._inductor.async_compile import AsyncCompile
from torch._inductor.select_algorithm import extern_kernels
from torch._inductor.codegen.multi_kernel import MultiKernelCall
aten = torch.ops.aten
inductor_ops = torch.ops.inductor
_quantized = torch.ops._quantized
assert_size_stride = torch._C._dynamo.guards.assert_size_stride
empty_strided_cpu = torch._C._dynamo.guards._empty_strided_cpu
empty_strided_cuda = torch._C._dynamo.guards._empty_strided_cuda
empty_strided_xpu = torch._C._dynamo.guards._empty_strided_xpu
reinterpret_tensor = torch._C._dynamo.guards._reinterpret_tensor
alloc_from_pool = torch.ops.inductor._alloc_from_pool
async_compile = AsyncCompile()
async_compile.wait(globals())
del async_compile
def call(args):
arg0_1, arg1_1 = args
args.clear()
assert_size_stride(arg0_1, (20, 50, 30), (1500, 30, 1))
assert_size_stride(arg1_1, (20, 50, 30), (1500, 30, 1))
with torch.cuda._DeviceGuard(0):
torch.cuda.set_device(0)
# Topologically Sorted Source Nodes: [], Original ATen: []
buf0 = torch.ops.aten.set_.source_Tensor(arg0_1, arg1_1)
assert_size_stride(buf0, (20, 50, 30), (1500, 30, 1))
del arg0_1
del arg1_1
return (buf0, )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
arg0_1 = rand_strided((20, 50, 30), (1500, 30, 1), device='cuda:0', dtype=torch.bfloat16)
arg1_1 = rand_strided((20, 50, 30), (1500, 30, 1), device='cuda:0', dtype=torch.bfloat16)
fn = lambda: call([arg0_1, arg1_1])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.9.1.el8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L20
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.3.0
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] optree==0.11.0
[pip3] pynvjitlink==0.1.13
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==3.0.0+989adb9a2
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformer-engine-torch==1.9.0
[pip3] triton==3.1.0
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,760,435,965
|
Fix fft jit ops cpu
|
ZhaoqiongZ
|
closed
|
[
"triaged",
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Fixes #142484
| true
|
2,760,354,401
|
[Inductor] Implement primitive Metal compiler
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143893
* #143892
Still a work in progress; it only works for elementwise operations. The current implementation can be used to turn something like
```python
def f(x):
return x[:,::2].sin() + x[:, 1::2].cos()
```
into the following shader
```python
# Topologically Sorted Source Nodes: [sin, cos, add], Original ATen: [aten.sin, aten.cos, aten.add]
# Source node to ATen node mapping:
# add => add
# cos => cos
# sin => sin
# Graph fragment:
# %sin : [num_users=1] = call_function[target=torch.ops.aten.sin.default](args = (%slice_2,), kwargs = {})
# %cos : [num_users=1] = call_function[target=torch.ops.aten.cos.default](args = (%slice_4,), kwargs = {})
# %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%sin, %cos), kwargs = {})
mps_lib = torch.mps._compile_shader("""
kernel void kernel_0(
device float* out_ptr0,
constant float* in_ptr0,
uint xindex [[thread_position_in_grid]]
) {
int x0 = xindex;
auto tmp0 = in_ptr0[2*x0];
auto tmp1 = metal::precise::sin(tmp0);
auto tmp2 = in_ptr0[2*x0 + 1];
auto tmp3 = metal::precise::cos(tmp2);
auto tmp4 = tmp1 + tmp3;
out_ptr0[x0] = static_cast<float>(tmp4);
}
""")
```
Please note that `torch.compile` in 2.7 is an early prototype and one should hold off on migrating until 2.8 is out; see the progress tracker here: https://github.com/pytorch/pytorch/issues/150121
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,760,354,375
|
[Inductor] Add MPS device op overrides
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143893
* __->__ #143892
Mostly a dummy interface, as the MPS backend is currently limited to a single device
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,760,354,344
|
[Dynamo] Add MPSDevice interface
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143893
* #143892
* __->__ #143891
That simply checks whether the device is available and whether or not it supports bf16
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,760,342,091
|
TORCH_NCCL_ENABLE_TIMING break nccl/matmul overlapping
|
cos120
|
closed
|
[
"oncall: distributed",
"module: nccl"
] | 22
|
NONE
|
### 🐛 Describe the bug
I am using Megatron-LM for training. I found that if I set `TORCH_NCCL_ENABLE_TIMING=1`, kernels that normally overlap in Megatron-LM no longer overlap, including the dw/dx backward in layer norm and the ZeRO-1 reduce-scatter/all-gather overlapping with matmul.
I have submitted an issue to `TransformerEngine`:
https://github.com/NVIDIA/TransformerEngine/issues/1353; maybe it relates to `CUDA_DEVICE_MAX_CONNECTIONS=1`.
### Versions
I am using 4 A100-SXM4 GPUs with pytorch2.4.0+cu124 and mcore0.9.0 with Transformer Engine (0.11.0+fc03478).
### update1
I used FSDP1 with 8 A100-SXM4 GPUs with pytorch2.4.0+cu124 and traced 3 configurations:
- TORCH_NCCL_ENABLE_TIMING=1
does not break overlapping of NCCL and matmul
- CUDA_DEVICE_MAX_CONNECTIONS=1
breaks the reduce-scatter in `FullyShardedDataParallel._post_backward_hook`

- CUDA_DEVICE_MAX_CONNECTIONS=1 TORCH_NCCL_ENABLE_TIMING=1
breaks all overlap, including the all-gather in forward


I uploaded the timeline and reproduction code; just set the different env vars and run
```bash
CUDA_DEVICE_MAX_CONNECTIONS=1 TORCH_NCCL_ENABLE_TIMING=1 python -m torch.distributed.run --master-addr localhost --master-port 5555 --nnodes 1 --nproc-per-node 8 --node-rank 0
```
[timeline.tar.gz](https://github.com/user-attachments/files/18326850/timeline.tar.gz)
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,760,310,498
|
The in-place version of unsqueeze is not supported by TorchDynamo when used in a specific way
|
meetmul
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
NONE
|
### 🐛 Describe the bug
If I directly call `torch.Tensor.unsqueeze_(x,y)` in my function, torch.compile fails with InternalTorchDynamoError. However, if I change the code to the `x.unsqueeze_(y)` format, torch.compile works.
code:
```python
import torch
@torch.compile
def f1(x, y):
return x.unsqueeze(y)
@torch.compile
def f2(x, y):
return torch.Tensor.unsqueeze_(x, y)
x = torch.tensor([1, 2, 3, 4])
y = 0
print(f1(x, y))
print(f2(x, y))
```
When running `f2`, pytorch throws the following error:
```
torch._dynamo.exc.InternalTorchDynamoError: IndexError: list index out of range
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,760,287,213
|
[dynamo] Trace through overridden __getattribute__ method
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"keep-going"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143698
* __->__ #143888
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,760,268,096
|
`torch.accelerator` cross-device utilities and properties
|
stas00
|
open
|
[
"triaged",
"module: accelerator"
] | 2
|
CONTRIBUTOR
|
### 🚀 The feature, motivation and pitch
as suggested by @albanD [here](https://pytorch.slack.com/archives/C3PDTEV8E/p1735120754479929?thread_ts=1735017298.875249&cid=C3PDTEV8E) opening an issue to discuss which cross-device utilities and device property fields should pytorch support.
1. properties report at the moment is inconsistent
- `torch.cuda.get_device_properties` does work for CUDA and ROCm but not other accelerators, e.g. one needs to use `torch.hpu.get_device_properties` for Gaudi.
- depending on whether it's CUDA or ROCm - the fields it outputs aren't the same - so a programmer can't reliably write cross-device applications
- a lot of info is missing, e.g. to get CUDA cores count I have to use `nvidia-settings -q CUDACores -t` or `cuda.bindings.runtime.cudaGetDeviceProperties()` - need to depend on other libs/utils and again this is not cross-device (albeit one could argue that this is a cuda-specific setting, so there is no cross-device cuda-core count - not sure)
- @albanD mentioned that `torch.accelerator` API should be overcoming the above issues
2. then let's discuss which cross-device utils should be there.
### cache clearing
- one that I started the discussion on is cache clearing - this is important for benchmarking. Currently various hacks are used to perform it, e.g. see how a hardcoded 256MB tensor is used by triton's `do_bench` - [init](https://github.com/triton-lang/triton/blob/a2b398e0bb1b120f31cf386d6ae3261c3ab84207/third_party/nvidia/backend/driver.py#L555-L556), [clearing](https://github.com/triton-lang/triton/blob/6ad95ee4fd9b1e172717323460fd54c250dd7d65/python/triton/testing.py#L120-L127) - so anybody using it either wastes compute clearing more cache than there is, or, as accelerators get bigger, 256MB will not be enough and the benchmark will return flawed results (see the sketch below).
The other complication is which cache we are clearing. In the NVIDIA world it's L1+L2, but for AMD it's L1+L2+AMD Infinity cache (Last Level Cache).
You will find the table of high end accelerator caches here https://github.com/stas00/ml-engineering/tree/master/compute/accelerator#caches - it's very inconsistent across accelerators - e.g. Intel Gaudi3 cache can be either L3 or L2 depending on the use case!
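A hedged sketch of the buffer-overwrite trick referenced above (the 256MB default and the helper name are assumptions; the point of this issue is that the size should come from real, cross-device cache properties instead):
```python
import torch

def clear_accelerator_cache(cache_bytes: int = 256 * 1024 * 1024,
                            device: str = "cuda") -> None:
    """Best-effort cache flush: overwrite a scratch buffer larger than the
    device caches so previously cached lines are evicted before a benchmark."""
    scratch = torch.empty(cache_bytes, dtype=torch.uint8, device=device)
    scratch.zero_()              # touch every byte of the scratch buffer
    torch.cuda.synchronize()     # assumes a CUDA/ROCm device for the sketch
```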
cc @albanD @guangyey @EikanWang
| true
|
2,760,254,098
|
[RFC] Identifying dynamic int8 symmetric vs asymmetric quantization of activation/input in Inductor-CPU
|
sanchitintel
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 0
|
COLLABORATOR
|
### 🚀 The feature, motivation and pitch
## Problem statement
If int8 asymmetric quantization is used, at Inductor compile time, the input used while invoking `torch.compile` might be such that the zero-points of activation for some quantized linear may _coincidentally_ be zero (per-tensor quantization) or all zeros (per-token quantization). In such a case, we might mistake it for symmetric quantization.
Please suggest some solutions to this problem besides these two.
## Potential solution 1
One solution is to make zero-point optional for dequantizing an int8 tensor.
In torchao, it is possible to make some changes to ensure that int8 symmetric quantization would not have zero-points, so they wouldn’t be present in the Inductor graph. But similar changes would have to be made for PT2E quantization as well.
Nevertheless, if this change is made only in torchao, then we could still leverage this change with Inductor patterns corresponding to int8 symmetrically quantized activations that don't use zero-points for dequantization, but users who don't use torchao wouldn't benefit.
cc @chauhang @penguinwu @leslie-fang-intel @Xia-Weiwen
# Alternatives
## Potential solution 2
For per-tensor quantization, we could add a runtime check in Inductor codegened code that'd detect as to whether the int8 quantization-type of an activation is symmetric or asymmetric (by checking if zp is 0).
But this approach may not be as performant for per-channel quantization (would need to check if any zp value is non-zero).
#### This approach needs some new infra in Inductor-CPU codegen -
Support for two variants of epilogues, both of which are compiled, but only one of which is used at runtime depending upon some check. In this case, one variant would only apply activation & weight scales, while the second one would also compute compensation; the decision to use one of them is made at runtime for the whole quantized linear.
### Additional context
We can compute int8 quantized linear with int8xint8 -> int32 GEMMs, so long as weights are not asymmetrically quantized.
If activations are asymmetrically quantized, we can apply compensation pertaining to zero-points of activation, after applying activation & weight scales.
If activations are symmetrically quantized, the computation is straightforward, and after int8 x int8 -> int32 GEMMs, we only need to apply pointwise activation & weight scales (which can happen at the block-level if we apply epilogues at micro-kernel level).
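A small numeric sketch of the compensation described above (per-tensor quantization; the scales, zero-point, and shapes are illustrative assumptions, and int64 stands in for the int32 accumulator):
```python
import torch

# Asymmetric int8 activation, symmetric int8 weight (per-tensor).
x_q = torch.randint(-128, 128, (4, 16), dtype=torch.int8)
w_q = torch.randint(-128, 128, (8, 16), dtype=torch.int8)
s_x, zp_x, s_w = 0.02, 5, 0.01   # assumed scales / activation zero-point

# Reference: dequantize first, then matmul in fp32.
ref = ((x_q.float() - zp_x) * s_x) @ (w_q.float() * s_w).t()

# Integer GEMM plus zero-point compensation:
# (x_q - zp_x) @ w_q^T == x_q @ w_q^T - zp_x * sum_k(w_q[:, k])
acc = x_q.to(torch.int64) @ w_q.to(torch.int64).t()
compensation = zp_x * w_q.to(torch.int64).sum(dim=1)
out = (acc - compensation).float() * (s_x * s_w)

print(torch.allclose(ref, out, atol=1e-4))
```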
| true
|
2,760,246,016
|
restore 'unused' variable to fix test_cuda_device_memory_allocated
|
dnikolaev-amd
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
This PR fixes `test_cuda_multigpu.py::TestCudaMultiGPU::test_cuda_device_memory_allocated`
by restoring a deleted 'unused' variable from commit https://github.com/pytorch/pytorch/commit/d8c8ba24404ef892d4d948eb095b69d90b9ba7e6
cc @jithunnair-amd @jeffdaily @pruthvistony
| true
|
2,760,218,546
|
[Inductor] Relax size constraints for re-inplacing
|
BoyuanFeng
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5
|
CONTRIBUTOR
|
Currently, re-inplacing requires the input buffer and output buffer to have exactly the same storage size. However, matmul padding may increase the tensor size slightly for better performance, which prevents re-inplacing.
This PR changes the size constraints to the following (see the sketch after this list):
- the input and output buffers have exactly the same symbolic expression for storage size (i.e., sympy str)
- it's statically known that 0.99 * input_size <= output_size <= input_size
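A hedged sketch of the relaxed predicate (names are illustrative, not the actual Inductor helper; reading the two constraints as alternatives is an assumption, and the symbolic comparison is reduced to a string check as described above):
```python
def can_reinplace(input_size: int, output_size: int,
                  input_size_expr: str, output_size_expr: str) -> bool:
    # Either the symbolic storage-size expressions (sympy strings) match exactly...
    if input_size_expr == output_size_expr:
        return True
    # ...or it is statically known that the output fits in the input and
    # wastes at most 1% of it (e.g. slack introduced by matmul padding).
    return 0.99 * input_size <= output_size <= input_size
```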
### Apply on llm.c
See the reuse of `buf1`.
Before relaxing size requirements on re-inplacing: ([P1703512078](https://www.internalfb.com/phabricator/paste/view/P1703512078))

After relaxing size requirements on re-inplacing: ([P1703513053](https://www.internalfb.com/phabricator/paste/view/P1703513053))

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,760,201,298
|
[dtensor] add src_data_rank to distribute_tensor API
|
wanchaol
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144005
* __->__ #143883
As titled, this PR adds a kwarg src_data_rank to the distribute_tensor
API, to allow the user to specify a specific rank as the source of the
full tensor data. Previously we by default used group_rank=0 as the source of
truth for single-device semantics; this new option:
* gives advanced users the flexibility to choose the source data rank
* allows the user to specify None explicitly, which means we will skip the
communications needed (scatter/broadcast) for cases that do not
care about single-device semantics (i.e. loading from a checkpoint)
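A hedged usage sketch of the new kwarg (not from the PR itself; it assumes an already-initialized process group, a 1-D CUDA device mesh, and the public `torch.distributed.tensor` import paths):
```python
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

mesh = init_device_mesh("cuda", (dist.get_world_size(),))
full = torch.randn(1024, 1024)

# Use rank 1 instead of the default group rank 0 as the source of the full tensor.
dtensor = distribute_tensor(full, mesh, [Shard(0)], src_data_rank=1)

# Skip the scatter/broadcast entirely when single-device semantics are not
# needed, e.g. when each rank loads its own shard from a checkpoint.
dtensor_local = distribute_tensor(full, mesh, [Shard(0)], src_data_rank=None)
```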
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,760,194,003
|
Add support for list, tuple and dict in numeric debugger
|
jerryzh168
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143882
Summary:
Previously the numeric debugger only supported torch.Tensor; this PR adds support for list, tuple and dict as well
Test Plan:
python test/test_quantization.py -k test_extract_results_from_loggers_list_output
Differential Revision: [D67660049](https://our.internmc.facebook.com/intern/diff/D67660049)
| true
|
2,760,172,691
|
remove allow-untyped-defs from _inductor/codegen/cpu_device_op_overrides.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143881
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,760,141,925
|
Add option to serialization config to reduce random reads from get_record_offset when loading with mmap=True
|
mikaylagawarecki
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: python_frontend",
"topic: improvements",
"ciflow/inductor",
"ci-no-td"
] | 13
|
CONTRIBUTOR
|
## Background
This PR adds `torch.utils.serialization.config.load.calculate_storage_offsets`. This option relies on the previous PR in this stack, where the storage order was changed to non-lexicographical. A `.format_version` entry was added to the zipfile, and `calculate_storage_offsets` will only work on checkpoints with `.format_version`.
When this is turned on, for `torch.load(mmap=True)`, the offset of each storage record (other than the 0th storage) will be calculated instead of relying on `miniz` APIs to determine it.
The existing APIs will issue multiple random reads (reading the end of central directory record, then reading the zipfile header for the record) to determine the storage offset where the record starts. This can greatly degrade `torch.load(mmap=True)` performance for non-filesystem cases.
https://github.com/pytorch/pytorch/blob/6aaae9d78f0992ac6265552e4f8323ef11d62bb0/caffe2/serialize/inline_container.cc#L589-L605
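A minimal usage sketch of the new option (assuming the checkpoint was saved with the new `.format_version` entry; the filename is illustrative):
```python
import torch
from torch.utils.serialization import config as serialization_config

serialization_config.load.calculate_storage_offsets = True  # compute offsets instead of issuing random reads
state_dict = torch.load("checkpoint.pt", mmap=True)          # the benefit applies to mmap-based loading
```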
## How does this work
The format for the checkpoint is as such
```
archive_name/
|_ data.pkl
|_ .format_version
|_ byteorder
|_ data/
   |_ 0
   |_ 1
   |_ 2
   |_ ...
```
Each `data/i` record represents a storage, where storages are written in the order that the Pickler encounters them.
For each storage, our `persistent_load` logic saves the following metadata to the pickle file `dtype, numel, key, location` where `numel` is the number of bytes in the storage.
Note that we always use the `miniz` writer in zip64 mode per [here](https://github.com/pytorch/pytorch/blob/7796e308d0636bcbfb2490c80291edd440d4bc42/caffe2/serialize/inline_container.cc#L701). A zipfile record written by miniz looks like this:
```
---------------- ----------------- ------------------- ---------------- --------- ------------------------------
| 30 byte header | n byte filename | zip64_extra_data | m byte padding | storage | 16 or 24 byte local dir footer |
---------------- ----------------- ------------------- ---------------- --------- ------------------------------
```
- The header size (30) is given by [`MZ_ZIP_LOCAL_DIR_HEADER_SIZE`](https://github.com/pytorch/pytorch/blob/main/third_party/miniz-3.0.2/miniz.c?fbclid=IwZXh0bgNhZW0CMTEAAR2O8Vysd--UoSCxW70gabXIS1dbz733oHwuUQ5_Ff1hY2WU6PL2i6CSH4A_aem_J9oaU2HpDeWtJKOU9EnVqw#L3290)
- filename will be `"{archive_name}/{filepath}"`
- `zip64_extra_data` is determined by [`mz_zip_writer_create_zip64_extra_data`](https://github.com/pytorch/pytorch/blob/7796e308d0636bcbfb2490c80291edd440d4bc42/third_party/miniz-3.0.2/miniz.c#L6202). Note that [we only create zip64_extra_data if storage_size >= 0xFFFFFFFF or the offset of the start of the header >= 0xFFFFFFFF](https://github.com/pytorch/pytorch/blob/7796e308d0636bcbfb2490c80291edd440d4bc42/third_party/miniz-3.0.2/miniz.c#L6519-L6524)
- `m` is determined by [`getPadding`](https://github.com/pytorch/pytorch/blob/7796e308d0636bcbfb2490c80291edd440d4bc42/caffe2/serialize/inline_container.cc#L254), which accounts for the filename and zip64_extra_data to determine `m` such that the start of `storage` is aligned to 64 bytes. The `m` bytes will always start with `F B padding_size` as the first 4 bytes
- The local dir footer size is determined based on [this snippet](https://github.com/pytorch/pytorch/blob/7796e308d0636bcbfb2490c80291edd440d4bc42/third_party/miniz-3.0.2/miniz.c#L6610-L6632): if the buffer size is 0 it is skipped. If the zip64_extra_data was created, it is 24, otherwise it is 16. (A simplified sketch of this offset arithmetic follows.)
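The sketch below is illustrative only; the real logic lives in `getPadding` and miniz, and the zip64 extra-data size is treated as an assumed constant here:
```python
MZ_ZIP_LOCAL_DIR_HEADER_SIZE = 30
ZIP64_EXTRA_DATA_SIZE = 20  # assumption for illustration; the real field is variable-length

def storage_start(record_start: int, filename: str, has_zip64_extra: bool, alignment: int = 64) -> int:
    # bytes occupied by the local header, the filename, and (optionally) the zip64 extra field
    prefix = MZ_ZIP_LOCAL_DIR_HEADER_SIZE + len(filename)
    if has_zip64_extra:
        prefix += ZIP64_EXTRA_DATA_SIZE
    # pad so the storage bytes begin on a 64-byte boundary (what getPadding computes; the real
    # implementation also reserves a small "F B <size>" marker at the start of the padding)
    padding = -(record_start + prefix) % alignment
    return record_start + prefix + padding
```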
When `torch.utils.serialization.config.load.calculate_storage_offsets` is set we do the following
- We keep track of where the "cursor" is in the file using `current_offset`, after each persistent_load call, it will be at the offset where the header for the next record starts
- for the 0th storage, "data/0", we use the regular get_record_offset to determine the start of the storage
- for any other storage, (where the storages will be in order encountered by the unpickler, 0, 1, 2, 3, ...) we use `get_record_offset_no_read`, which re-uses the `getPadding` logic to determine the offset of the storage
- Note that `load_tensor` will only ever be called again with the same key if the storage's `._data_ptr()` is 0 [[pointer1](https://github.com/pytorch/pytorch/blob/main/torch/serialization.py#L1917-L1918)][[pointer2](https://github.com/pytorch/pytorch/blob/main/torch/serialization.py#L1936-L1937)], so we cache the offsets for this edge case
- After each storage, if the storage is non-zero, we account for the local dir footer based on the logic described above
## Testing strategy
The agreed upon testing strategy was as follows:
- Add debug code gated by an environment flag `TORCH_SERIALIZATION_DEBUG` that will run this offset calculation logic and verify it against getRecordOffset for each storage (when mmap=False)
- This flag is set throughout CI, which means that every time `torch.load` is called, the offset calculation logic is implicitly being tested.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143880
* #143879
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
Differential Revision: [D67673026](https://our.internmc.facebook.com/intern/diff/D67673026)
| true
|
2,760,141,879
|
Remove lexicographical sorting of storage keys in torch.save
|
mikaylagawarecki
|
closed
|
[
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 27
|
CONTRIBUTOR
|
Currently the order is lexicographical (i.e. 0, 10, 11, ..., 19, 2, ...) instead of 0, 1, 2, 3, 4, 5 (the order in which storage metadata is actually pickled). Since PyTorch will never be used with Python < 3.7, we can be assured that the keys will be read in insertion (i.e. numeric) order.
This makes the order in which storages are written match the pickling/unpickling order, so we can calculate their offsets with fewer random reads.
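For illustration, lexicographic ordering of the string keys differs from the numeric insertion order:
```python
keys = [str(i) for i in range(12)]  # written/pickled order: 0, 1, 2, ..., 11
print(sorted(keys))                 # lexicographic: ['0', '1', '10', '11', '2', '3', ...]
```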
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143880
* __->__ #143879
| true
|
2,760,111,010
|
[fr][c10d] fix flaky test
|
c-p-i-o
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143878
* #143865
Summary:
The test erroneously assumed that input/output sizes are the same and that
all states are matchable.
Fixes issue #143798
Test Plan:
Test passes
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,760,071,733
|
dont assign a size to _assert_scalar in partitioner
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: composability",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/143876
Open to other suggestions - we have an invariant that all nodes in our ATen graphs should have a `meta['val']` field, but I don't think this is actually true in all cases, so I just hardcoded the invariant check to ignore `_assert_scalar()` (a "special" op used in dynamic shapes for runtime asserts that doesn't carry a `meta['val']` field).
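A minimal sketch of the idea (not the actual partitioner code; the size accounting below is a stand-in):
```python
import torch

def size_of_node(node) -> int:
    # runtime-assert ops like aten._assert_scalar carry no meta['val'];
    # treat them as size 0 rather than erroring in the partitioner's size accounting
    if node.target is torch.ops.aten._assert_scalar.default:
        return 0
    val = node.meta["val"]
    return val.numel() * val.element_size() if isinstance(val, torch.Tensor) else 0
```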
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144097
* #141131
* #144438
* __->__ #143877
| true
|
2,760,059,559
|
`aten._assert_scalar` can hard error the partitioner
|
bdhirsh
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 0
|
CONTRIBUTOR
|
internal xref: https://fb.workplace.com/groups/1075192433118967/permalink/1567692087202330/
(second xref: https://fb.workplace.com/groups/1075192433118967/posts/1574136133224592/?comment_id=1575214129783459&reply_comment_id=1577334836238055)
I haven't been able to run the internal repro properly, but I did make a (hopefully representative) tiny OSS repro:
```
import torch
torch._dynamo.config.capture_dynamic_output_shape_ops = True
from torch._functorch import config
config.ban_recompute_not_in_allowlist = False
@torch.compile(backend="aot_eager")
def f(x):
    y = x.nonzero()
    tmp = torch.ones_like(y)
    return x.sum() + tmp.sum()

x = torch.ones(4, requires_grad=True)
out = f(x)
cc @chauhang @penguinwu @ezyang @bobrenjc93 @zou3519 @yf225
| true
|
2,760,054,588
|
Use random64 in Fischer-Yates algorithm for large N (#143682)
|
ngimel
|
closed
|
[
"release notes: dataloader"
] | 1
|
COLLABORATOR
|
Fixes bug in randperm https://nbsanity.com/static/a4774194938414dedcec7d6e99727d31/Shuffling_20in_20torch_20vs_20numpy-public.html
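For context, an illustrative Python sketch of the Fisher-Yates shuffle; the linked fix draws the swap index from a 64-bit random source for large N (this is not the actual CPU/CUDA kernel):
```python
import random

def fisher_yates(n: int) -> list:
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        # draw j uniformly from [0, i]; for very large n this needs a 64-bit random source
        j = random.getrandbits(64) % (i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm
```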
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143682
Approved by: https://github.com/eqy, https://github.com/albanD
| true
|
2,760,047,872
|
[Performance] Simple arithemtic operations are slower using MPS than Metal
|
malfet
|
closed
|
[
"module: performance",
"triaged",
"module: mps"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Reported by @swolchok and could be confirmed by running something like the following
```python
import torch
from timeit import default_timer
from torch.utils.benchmark import Measurement, Timer
def bench_binary(
    n,
    binary_func,
    dtype=torch.float32,
) -> Measurement:
    t = Timer(
        stmt=f"f(x, y);f(x, y); f(x, y); torch.mps.synchronize()",
        setup=f"x, y=torch.rand((2, {n}), dtype={dtype}, device='mps').unbind(0)",
        globals = {'f': binary_func},
        language="python", timer=default_timer
    )
    return t.blocked_autorange()

mps_lib = torch.mps._compile_shader("""
#include <metal_stdlib>
using namespace metal;

template<typename T>
kernel void add(constant T* x,
                constant T* y,
                device T* out,
                uint index [[thread_position_in_grid]])
{
    out[index] = static_cast<T>(x[index] + y[index]);
}

template [[host_name("add_float")]] kernel void add(constant float*, constant float*, device float*, uint);
template [[host_name("add_half")]] kernel void add(constant half*, constant half*, device half*, uint);
template [[host_name("add_bfloat")]] kernel void add(constant bfloat*, constant bfloat*, device bfloat*, uint);
""")

def metal_add(x, y):
    rc = torch.empty_like(x)
    {torch.float: mps_lib.add_float,
     torch.half: mps_lib.add_half,
     torch.bfloat16: mps_lib.add_bfloat}[x.dtype](x, y, rc)
    return rc

if __name__ == "__main__":
    n = 1024**2
    for dtype in [torch.float32, torch.float16, torch.bfloat16]:
        # Validate correctness first
        inp = torch.rand(2, n, dtype=dtype, device="mps").unbind(0)
        out = torch.add(*inp)
        out_metal = metal_add(*inp)
        if not torch.allclose(out, out_metal):
            raise RuntimeError(f"out-out_metal.abs().max() is {(out-out_metal).abs().max().item()} for {dtype}")
        eager_t = bench_binary(n, torch.add, dtype)
        metal_t = bench_binary(n, metal_add, dtype)
        use_msec = eager_t.mean > 1e-4 or metal_t.mean > 1e-4
        multiplier = 1e3 if use_msec else 1e6
        uname = "msec" if use_msec else "usec"
        print(f"torch.add()x3 {str(dtype):>14} {eager_t.mean*multiplier:>7.2f} {uname} metal_add()x3: {metal_t.mean*multiplier:>7.2f} {uname} speedup: {eager_t.mean/metal_t.mean:>7.2f}")
```
On M1 pro Metal implementation of the same shader runs 20% faster than MPS one for 1 million elements
```
torch.add()x3 torch.float32 0.53 msec metal_add()x3: 0.42 msec speedup: 1.27
torch.add()x3 torch.float16 0.45 msec metal_add()x3: 0.37 msec speedup: 1.21
torch.add()x3 torch.bfloat16 0.44 msec metal_add()x3: 0.37 msec speedup: 1.19
```
More involved example can be seen here: https://github.com/pytorch/pytorch/pull/143656
### Versions
2.5.1, nightly
cc @msaroufim @kulinseth @albanD @DenisVieriu97 @jhavukainen
| true
|
2,760,031,393
|
use statically known true over guards in tensor view ops
|
bobrenjc93
|
closed
|
[
"ciflow/trunk",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143873
internal xref: https://fb.workplace.com/groups/1075192433118967/posts/1570866680218204/
Differential Revision: [D67651945](https://our.internmc.facebook.com/intern/diff/D67651945)
| true
|
2,760,030,184
|
[FlexAttention] make bm creation cuda-graphable
|
drisspg
|
closed
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"module: flex attention"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143872
# Summary
Addresses: https://github.com/pytorch/pytorch/issues/143840
Currently failing dynamic-shapes test: test/inductor/test_flex_attention.py::TestBlockMask::test_compiling_create_block_mask_no_recompile - torch._dynamo.exc.TorchRuntimeError: Failed running call_method scatter_(*(BatchedTensor(lvl=2,...
It fails with the traceback below.
CC @zou3519 for ideas on why this is failing:
``` Shell
File "/home/drisspg/meta/pytorch/torch/_dynamo/utils.py", line 2694, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/home/drisspg/meta/pytorch/torch/_dynamo/utils.py", line 2678, in run_node
return getattr(args[0], node.target)(*args[1:], **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Failed running call_method scatter_(*(BatchedTensor(lvl=2, bdim=0, value=
BatchedTensor(lvl=1, bdim=0, value=
FakeTensor(..., device='cuda:0',
size=(s0, s1, (s2 + 127//128), ((s3 + 127//128)) + 1),
dtype=torch.int32)
)
), 1, BatchedTensor(lvl=2, bdim=0, value=
BatchedTensor(lvl=1, bdim=0, value=
FakeTensor(..., device='cuda:0',
size=(s0, s1, (s2 + 127//128), (s3 + 127//128)), dtype=torch.int64)
)
), BatchedTensor(lvl=2, bdim=0, value=
BatchedTensor(lvl=1, bdim=0, value=
FakeTensor(..., device='cuda:0',
size=(s0, s1, (s2 + 127//128), (s3 + 127//128)), dtype=torch.int32)
)
)), **{}):
Cannot call sizes() on tensor with symbolic sizes/strides
Exception raised from throw_cannot_call_with_symbolic at /home/drisspg/meta/pytorch/c10/core/TensorImpl.cpp:291 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x7fc0fd78fbe8 in /home/drisspg/meta/pytorch/torch/lib/libc10.so)
frame #1: c10::TensorImpl::throw_cannot_call_with_symbolic(char const*) const + 0x8d (0x7fc0fd738181 in /home/drisspg/meta/pytorch/torch/lib/libc10.so)
frame #2: at::functorch::BatchedTensorImpl::sizes_custom() const + 0x5c (0x7fc0ec1a0e0c in /home/drisspg/meta/pytorch/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x179d11f (0x7fc0ec19d11f in /home/drisspg/meta/pytorch/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x64036c (0x7fc0fde4036c in /home/drisspg/meta/pytorch/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0x63ceed (0x7fc0fde3ceed in /home/drisspg/meta/pytorch/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0x17c145b (0x7fc0ec1c145b in /home/drisspg/meta/pytorch/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x17ab401 (0x7fc0ec1ab401 in /home/drisspg/meta/pytorch/torch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x17a614c (0x7fc0ec1a614c in /home/drisspg/meta/pytorch/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x64036c (0x7fc0fde4036c in /home/drisspg/meta/pytorch/torch/lib/libtorch_python.so)
frame #10: <unknown function> + 0x63ceed (0x7fc0fde3ceed in /home/drisspg/meta/pytorch/torch/lib/libtorch_python.so)
frame #11: at::_ops::scatter__src::call(at::Tensor&, long, at::Tensor const&, at::Tensor const&) + 0x3d1 (0x7fc0ecec4281 in /home/drisspg/meta/pytorch/torch/lib/libtorch_cpu.so)
frame #12: <unknown function> + 0x41b9c2 (0x7fc0fdc1b9c2 in /home/drisspg/meta/pytorch/torch/lib/libtorch_python.so)
frame #13: <unknown function> + 0x2240a8 (0x55944f3cc0a8 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #14: _PyObject_Call + 0xb5 (0x55944f3dcb35 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #15: <unknown function> + 0x11350a (0x55944f2bb50a in /home/drisspg/.conda/envs/dev/bin/python3)
frame #16: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #17: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #18: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #19: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #20: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #21: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #22: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #23: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #24: <unknown function> + 0x11350a (0x55944f2bb50a in /home/drisspg/.conda/envs/dev/bin/python3)
frame #25: _PyObject_Call + 0x12b (0x55944f3dcbab in /home/drisspg/.conda/envs/dev/bin/python3)
frame #26: <unknown function> + 0x11350a (0x55944f2bb50a in /home/drisspg/.conda/envs/dev/bin/python3)
frame #27: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #28: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #29: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #30: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #31: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #32: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #33: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #34: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #35: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #36: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #37: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #38: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #39: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #40: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #41: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #42: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #43: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #44: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #45: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #46: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #47: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #48: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #49: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #50: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #51: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #52: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #53: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #54: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #55: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #56: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #57: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #58: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #59: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #60: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
frame #61: PyObject_Vectorcall + 0x2e (0x55944f3c0cbe in /home/drisspg/.conda/envs/dev/bin/python3)
frame #62: <unknown function> + 0x112892 (0x55944f2ba892 in /home/drisspg/.conda/envs/dev/bin/python3)
from user code:
File "/home/drisspg/meta/pytorch/torch/nn/attention/flex_attention.py", line 890, in create_block_mask
block_mask = _create_sparse_block_from_block_mask(
File "/home/drisspg/meta/pytorch/torch/nn/attention/flex_attention.py", line 762, in _create_sparse_block_from_block_mask
return BlockMask.from_kv_blocks(
File "/home/drisspg/meta/pytorch/torch/nn/attention/flex_attention.py", line 350, in from_kv_blocks
q_num_blocks, q_indices = _transpose_ordered(kv_num_blocks, kv_indices)
File "/home/drisspg/meta/pytorch/torch/nn/attention/flex_attention.py", line 184, in _transpose_ordered
dense = _ordered_to_dense(num_blocks_in_row, col_indices)
File "/home/drisspg/meta/pytorch/torch/nn/attention/flex_attention.py", line 169, in _ordered_to_dense
out = create_dense_batched(num_blocks_in_row, col_indices)
File "/home/drisspg/meta/pytorch/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "/home/drisspg/meta/pytorch/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "/home/drisspg/meta/pytorch/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/home/drisspg/meta/pytorch/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "/home/drisspg/meta/pytorch/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "/home/drisspg/meta/pytorch/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/home/drisspg/meta/pytorch/torch/nn/attention/flex_attention.py", line 162, in create_dense_one
dense_mask.scatter_(1, valid_indices.to(torch.int64), values)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestBlockMask.test_compiling_create_block_mask_no_recompile
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @Chillee @yanboliang @BoyuanFeng
| true
|
2,760,011,236
|
remove allow-untyped-defs from torch/distributed/pipelining/_debug.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143871
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,760,011,176
|
remove allow-untyped-defs from _inductor/codegen/rocm/rocm_template_buffer.py
|
bobrenjc93
|
closed
|
[
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 5
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143870
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,760,011,123
|
remove allow-untyped-defs from distributed/elastic/multiprocessing/errors/handlers.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (torchelastic)"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143869
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,760,011,088
|
remove allow-untyped-defs from fx/experimental/refinement_types.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx"
] | 10
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143868
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,760,011,032
|
remove allow-untyped-defs from torch/ao/quantization/experimental/APoT_tensor.py
|
bobrenjc93
|
closed
|
[
"release notes: quantization",
"release notes: AO frontend"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143873
* #143871
* #143870
* #143869
* #143868
* __->__ #143867
| true
|
2,759,989,549
|
Fix batch-specific attention mod for NJT + Flex
|
jbschlosser
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: bug fixes",
"release notes: nested tensor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143866
Fixes #143788
| true
|
2,759,977,242
|
[fr][c10d] log trace capture enabled or not in flight recorder
|
c-p-i-o
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143878
* __->__ #143865
Summary:
Refactor logging for flight recorder so we can log whether the capture was
taken with stack trace capture enabled.
We introduce a new column ('trace_enabled') in the logger.
Test Plan:
Tested on local job and noted that correct output was produced.
Internal link: https://fburl.com/scuba/c10d_flight_recorder/ulhqnmhg
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
2,759,949,743
|
Adaptive pool MPS
|
sebassaras02
|
open
|
[
"triaged",
"enhancement",
"module: pooling",
"module: mps"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
Hello, I've been trying to train a VGG architecture on an M3 chip.
I get this error:
RuntimeError: Adaptive pool MPS: output sizes must be divisible by input sizes. Non-divisible input sizes are not implemented on MPS device yet. For now, you can manually transfer tensor to cpu in this case. Please refer to [this issue](https://github.com/pytorch/pytorch/issues/96056)
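For reference, a minimal sketch of the CPU fallback that the error message itself suggests (shapes are illustrative):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 512, 15, 10, device="mps")           # 15 and 10 are not divisible by 7
out = F.adaptive_avg_pool2d(x.cpu(), (7, 7)).to("mps")  # run the pool on CPU, then move back to MPS
```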
### Alternatives
_No response_
### Additional context
_No response_
cc @mikaylagawarecki @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,759,820,850
|
Composite RoPE gives ridiculous profiling trace
|
Mmmofan
|
closed
|
[
"triaged"
] | 1
|
NONE
|
### 🐛 Describe the bug
As I described in https://discuss.pytorch.org/t/composite-rope-backward-gives-a-large-tocopybackward0-in-profiling-trace/214668 , this code outputs a ridiculous trace json:
```python
#!/usr/bin/env python
# encoding: utf-8
import torch
from torch.nn import functional as F
import time
import os
from functools import partial
import torch
import torch.distributed as dist
from torch.profiler import (
profile,
ProfilerActivity,
schedule,
)
def trace_handler(profiler, file_path, op_name):
    file_path = os.path.join(file_path, f"profiling-{op_name}.trace.json")
    profiler.export_chrome_trace(file_path)

def get_profiler(file_path, op_name):
    warmup = 5
    profile_schedule = schedule(wait=2, warmup=warmup, active=1)
    profiler = profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        schedule=profile_schedule,
        record_shapes=True,
        on_trace_ready=partial(trace_handler, file_path=file_path, op_name=op_name),
        with_flops=True,
        profile_memory=True,
        with_modules=True,
    )
    return profiler

def rotate_half(t: torch.Tensor) -> torch.Tensor:
    t_1, t_2 = torch.chunk(t, 2, dim=-1)
    return torch.cat((-t_2, t_1), dim=-1)

def apply_rotary_pos_emb_bshd(t: torch.Tensor, freqs: torch.Tensor):
    rot_dim = freqs.shape[-1]
    # ideally t_pass is empty so rotary pos embedding is applied to all tensor t
    t, t_pass = t[..., :rot_dim], t[..., rot_dim:]
    # first part is cosine component
    # second part is sine component, need to change signs with _rotate_half method
    cos_ = torch.cos(freqs).to(t.dtype)
    sin_ = torch.sin(freqs).to(t.dtype)
    t = (t * cos_) + (rotate_half(t) * sin_)
    return torch.cat((t, t_pass), dim=-1)

def test_ops(op_func, op_name, in_params: dict):
    # for warm up
    out = op_func(**in_params)
    loss = out.sum()
    loss.backward()
    profiler = get_profiler("/workspace", op_name)
    test_iters = 10
    torch.cuda.synchronize()
    start = time.time()
    with profiler as prof:
        for _ in range(test_iters):
            out = op_func(**in_params)
            loss = out.sum()
            loss.backward()
            prof.step()
    torch.cuda.synchronize()
    using_time = time.time() - start
    print(f'{op_name} \t cost: {using_time}')

def test_rope():
    max_seq_len = 4096
    batch_size = 10
    head_num = 32
    dim = 128 * 32
    dim = dim // head_num
    input_shape = (max_seq_len, batch_size, head_num, dim)
    input_ts = torch.randn(input_shape, dtype=torch.float32, requires_grad=True)
    freqs_cis = torch.randn(max_seq_len, dim)
    freqs_cis_4d = freqs_cis.reshape(max_seq_len, 1, 1, dim)
    input_data_out_F = {
        "t": input_ts.cuda(),
        "freqs": freqs_cis_4d.cuda()
    }
    test_ops(op_func=apply_rotary_pos_emb_bshd,
             op_name="rope",
             in_params=input_data_out_F,
             )

if __name__ == '__main__':
    test_rope()
```
The trace looks like
<img width="1304" alt="image" src="https://github.com/user-attachments/assets/3d3214b2-8fdd-480f-b5a5-9c11e7f7b82b" />
### Versions
CUDA: 12.2
PyTorch: v2.4.0
System: Ubuntu22.04
| true
|
2,759,745,960
|
pytorch v2.2.2 build for nvidia jetson orin nano 8GB
|
lida2003
|
closed
|
[
"module: build",
"triaged",
"module: jetson"
] | 2
|
NONE
|
### 🐛 Describe the bug
pytorch v2.2.2 build for nvidia jetson orin 8GB
Previous discussion here FYI: https://forums.developer.nvidia.com/t/request-build-script-for-pytorch-or-up-to-date-pytorh-binary-release-supporting-jetson-boards-running-l4t35-6-ubuntu20-04/316972/12
```
commit 39901f229520a5256505ec24782f716ee7ddc843 (HEAD, tag: v2.2.2-rc3, tag: v2.2.2, origin/release/2.2)
Author: pytorchbot <soumith+bot@pytorch.org>
Date: Mon Mar 25 14:33:04 2024 -0700
Fix lower precision check for MKLDNN on Windows (#122645)
Fixes #120788
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121618
Approved by: https://github.com/xuhancn, https://github.com/jgong5, https://github.com/mingfeima, https://github.com/seemethere
(cherry picked from commit 03717430cc54609189cc7df593b2c96a99fb7f55)
Co-authored-by: CaoE <e.cao@intel.com>
```
```
Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.4 [L4T 35.6.0]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
- P-Number: p3767-0005
- Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
- Distribution: Ubuntu 20.04 focal
- Release: 5.10.216-tegra
jtop:
- Version: 4.2.12
- Service: Active
Libraries:
- CUDA: 11.4.315
- cuDNN: 8.6.0.166
- TensorRT: 8.5.2.2
- VPI: 2.4.8
- OpenCV: 4.9.0 - with CUDA: YES
DeepStream C/C++ SDK version: 6.3
Python Environment:
Python 3.8.10
GStreamer: YES (1.16.3)
NVIDIA CUDA: YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
OpenCV version: 4.9.0 CUDA True
YOLO version: 8.3.33
Torch version: 2.1.0a0+41361538.nv23.06
Torchvision version: 0.16.1+fdea156
DeepStream SDK version: 1.1.8
```
### Error logs
```
[4405/5756] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/home/daniel/Work/pytorch_v2.2.2/build/aten/src -I/home/daniel/Work/pytorch_v2.2.2/aten/src -I/home/daniel/Work/pytorch_v2.2.2/build -I/home/daniel/Work/pytorch_v2.2.2 -I/home/daniel/Work/pytorch_v2.2.2/cmake/../third_party/benchmark/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/onnx -I/home/daniel/Work/pytorch_v2.2.2/build/third_party/onnx -I/home/daniel/Work/pytorch_v2.2.2/third_party/foxi -I/home/daniel/Work/pytorch_v2.2.2/build/third_party/foxi -I/home/daniel/Work/pytorch_v2.2.2/torch/csrc/api -I/home/daniel/Work/pytorch_v2.2.2/torch/csrc/api/include -I/home/daniel/Work/pytorch_v2.2.2/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.2.2/build/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.2.2/build/caffe2/aten/src -I/home/daniel/Work/pytorch_v2.2.2/build/caffe2/../aten/src -I/home/daniel/Work/pytorch_v2.2.2/torch/csrc -I/home/daniel/Work/pytorch_v2.2.2/third_party/miniz-2.1.0 -I/home/daniel/Work/pytorch_v2.2.2/third_party/kineto/libkineto/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/kineto/libkineto/src -I/home/daniel/Work/pytorch_v2.2.2/aten/src/ATen/.. -I/home/daniel/Work/pytorch_v2.2.2/third_party/FXdiv/include -I/home/daniel/Work/pytorch_v2.2.2/c10/.. -I/home/daniel/Work/pytorch_v2.2.2/third_party/pthreadpool/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/cpuinfo/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/NNPACK/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/FP16/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/fmt/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/flatbuffers/include -isystem /home/daniel/Work/pytorch_v2.2.2/cmake/../third_party/googletest/googlemock/include -isystem /home/daniel/Work/pytorch_v2.2.2/cmake/../third_party/googletest/googletest/include -isystem /home/daniel/Work/pytorch_v2.2.2/third_party/protobuf/src -isystem /home/daniel/Work/pytorch_v2.2.2/third_party/XNNPACK/include -isystem /home/daniel/Work/pytorch_v2.2.2/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/daniel/Work/pytorch_v2.2.2/build/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -D__NEON__ -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-unused-function -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-strict-overflow -Wno-strict-aliasing -Wno-maybe-uninitialized 
-fvisibility=hidden -O2 -pthread -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o -c /home/daniel/Work/pytorch_v2.2.2/build/aten/src/ATen/Operators_1.cpp
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
[4406/5756] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/home/daniel/Work/pytorch_v2.2.2/build/aten/src -I/home/daniel/Work/pytorch_v2.2.2/aten/src -I/home/daniel/Work/pytorch_v2.2.2/build -I/home/daniel/Work/pytorch_v2.2.2 -I/home/daniel/Work/pytorch_v2.2.2/cmake/../third_party/benchmark/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/onnx -I/home/daniel/Work/pytorch_v2.2.2/build/third_party/onnx -I/home/daniel/Work/pytorch_v2.2.2/third_party/foxi -I/home/daniel/Work/pytorch_v2.2.2/build/third_party/foxi -I/home/daniel/Work/pytorch_v2.2.2/torch/csrc/api -I/home/daniel/Work/pytorch_v2.2.2/torch/csrc/api/include -I/home/daniel/Work/pytorch_v2.2.2/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.2.2/build/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.2.2/build/caffe2/aten/src -I/home/daniel/Work/pytorch_v2.2.2/build/caffe2/../aten/src -I/home/daniel/Work/pytorch_v2.2.2/torch/csrc -I/home/daniel/Work/pytorch_v2.2.2/third_party/miniz-2.1.0 -I/home/daniel/Work/pytorch_v2.2.2/third_party/kineto/libkineto/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/kineto/libkineto/src -I/home/daniel/Work/pytorch_v2.2.2/aten/src/ATen/.. -I/home/daniel/Work/pytorch_v2.2.2/third_party/FXdiv/include -I/home/daniel/Work/pytorch_v2.2.2/c10/.. -I/home/daniel/Work/pytorch_v2.2.2/third_party/pthreadpool/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/cpuinfo/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/NNPACK/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/FP16/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/fmt/include -I/home/daniel/Work/pytorch_v2.2.2/third_party/flatbuffers/include -isystem /home/daniel/Work/pytorch_v2.2.2/cmake/../third_party/googletest/googlemock/include -isystem /home/daniel/Work/pytorch_v2.2.2/cmake/../third_party/googletest/googletest/include -isystem /home/daniel/Work/pytorch_v2.2.2/third_party/protobuf/src -isystem /home/daniel/Work/pytorch_v2.2.2/third_party/XNNPACK/include -isystem /home/daniel/Work/pytorch_v2.2.2/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/daniel/Work/pytorch_v2.2.2/build/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -D__NEON__ -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-unused-function -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-strict-overflow -Wno-strict-aliasing -Wno-maybe-uninitialized 
-fvisibility=hidden -O2 -pthread -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o -c /home/daniel/Work/pytorch_v2.2.2/build/aten/src/ATen/RegisterCPU.cpp
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
[4412/5756] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_2.cpp.o
ninja: build stopped: subcommand failed.
```
### Versions
```
daniel@daniel-nvidia:~/Work/pytorch$ python3 collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 7 2024, 13:10:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.216-tegra-aarch64-with-glibc2.29
Is CUDA available: N/A
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Thread(s) per core: 1
Core(s) per socket: 3
Socket(s): 2
Vendor ID: ARM
Model: 1
Model name: ARMv8 Processor rev 1 (v8l)
Stepping: r0p1
CPU max MHz: 1510.4000
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 1.5 MiB
L3 cache: 2 MiB
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.3.12
[pip3] onnxruntime==1.16.3
[pip3] onnxruntime-gpu==1.17.0
[pip3] onnxslim==0.1.36
[pip3] optree==0.13.1
[pip3] torch==2.1.0a0+41361538.nv23.6
[pip3] torch2trt==0.5.0
[pip3] torchvision==0.16.1
[conda] Could not collect
```
cc @malfet @seemethere @ptrblck @puririshi98 @chauhang @penguinwu
| true
|
2,759,696,192
|
_transform_bias_rescale_qkv cpu op get error on debug build
|
garfield1997
|
open
|
[
"module: nn",
"triaged"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The following code will produce the following error
code
```
import torch
qkv = torch.randn([4, 16, 576])
qkv_bias = torch.randn([576])
num_heads=4
torch._transform_bias_rescale_qkv(
    qkv, qkv_bias, num_heads
)
```
output
```
Traceback (most recent call last):
File "/workspace/testops.py", line 7, in <module>
torch._transform_bias_rescale_qkv(
RuntimeError: t.storage().use_count() == 1 INTERNAL ASSERT FAILED at "/workspace/pytorch/torch/csrc/autograd/autograd_not_implemented_fallback.cpp":413, please report a bug to PyTorch.
```
### Versions
nightly hash b74622335a2c4776fa654939ec89bf1ef45b8a2f
(pytorch) root@bjys1040:/workspace# python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+gitb746223
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.8 (main, Nov 6 2024, 16:44:26) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 104
On-line CPU(s) list: 0-103
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 26
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1000.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.6 MiB (52 instances)
L1i cache: 1.6 MiB (52 instances)
L2 cache: 52 MiB (52 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-25,52-77
NUMA node1 CPU(s): 26-51,78-103
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.13.1
[pip3] torch==2.6.0.dev20241215+cpu
[pip3] torchaudio==2.6.0.dev20241215+cpu
[pip3] torchvision==0.22.0.dev20241215+cpu
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,759,665,288
|
MPSNDArray limits a single NDArray to at most 4GB of memory
|
OutisLi
|
open
|
[
"needs reproduction",
"module: crash",
"triaged",
"module: 64-bit",
"module: mps"
] | 2
|
NONE
|
### 🐛 Describe the bug
/AppleInternal/Library/BuildRoots/b11baf73-9ee0-11ef-b7b4-7aebe1f78c73/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:850: failed assertion `[MPSNDArray initWithDevice:descriptor:isTextureBacked:] Error: total bytes of NDArray > 2**32'
[1] 13512 abort /opt/homebrew/Caskroom/miniforge/base/envs/pyTrim/bin/python
/opt/homebrew/Caskroom/miniforge/base/envs/pyTrim/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
However, my program consumes only a little RAM overall; it just holds several very large tensors, and the total RAM is more than enough. How can I overcome this limit?
### Versions
pytorch 2.5.1 cpu_generic_py312h99d64c8_6 conda-forge
I am using mac mini with m4Pro, 64G RAM, macos15.2,
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,759,644,273
|
Make init_method deprecated to fix TCP connection refused error
|
taozhiwei
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"topic: not user facing",
"module: inductor"
] | 11
|
CONTRIBUTOR
|
```
import os
os.environ["TORCH_CPP_LOG_LEVEL"] = "INFO"
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
import torch
import torch.distributed as dist
def main():
    rank = int(os.environ["RANK"]) if "RANK" in os.environ else 0
    world_size = int(
        os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1
    dist.init_process_group("gloo", rank=rank, world_size=world_size, init_method="tcp://localhost:35980")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```
Save the above code to `test_init_process.py`, then running `torchrun --nproc_per_node=4 test_init_process.py` results in errors like `[I1226 11:25:16.460453259 socket.cpp:919] [c10d - trace] The server socket on localhost:35980 is not yet listening (errno: 111 - Connection refused), will retry.`
You must use `torchrun --nproc_per_node=4 --rdzv-endpoint localhost:35980 test_init_process.py` for it to execute normally.
This is because the default IP and port set by [https://github.com/pytorch/pytorch/blob/v2.6.0-rc3/torch/distributed/run.py#L597-L616](https://github.com/pytorch/pytorch/blob/v2.6.0-rc3/torch/distributed/run.py#L597-L616) are not consistent with the init_method parameter passed to init_process_group.
BTW, the default IP and port settings also prevent [this code](https://github.com/pytorch/pytorch/blob/v2.6.0-rc3/torch/distributed/launcher/api.py#L170-L173) from running. Should we remove the default IP and port settings?
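For reference, a variant that works with a plain `torchrun` launch, relying on the env:// rendezvous that torchrun already configures (a sketch, not a proposed API change):
```python
import torch.distributed as dist

def main():
    # torchrun exports RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT, so the default
    # env:// init_method picks them up without hard-coding a TCP endpoint.
    dist.init_process_group("gloo")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```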
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,759,631,923
|
`@torch.jit.script` causes `pytest-cov` to miss function body
|
anvdn
|
open
|
[
"oncall: jit",
"feature"
] | 1
|
NONE
|
### 🐛 Describe the bug
When decorating a function with `@torch.jit.script`, its body's code coverage is ignored by `pytest-cov`. Even with exhaustive testing, the coverage report always marks the function body as uncovered.
### Instructions to reproduce
```
root/
│
├── ml_framework/
│ └── module.py
│
└── tests/
└── test_module.py
```
`module.py`
```python
import torch
@torch.jit.script
def function() -> int:
    return 0
```
`test_module.py`
```
from unittest import TestCase
from module import function
class TestModule(TestCase):
    def test_function(self) -> None:
        self.assertEqual(0, function())
```
Run:
```
pytest --cov=ml_framework.module test_module.py --cov-report html cov/
```
- with the decorator (the function body is tested hence should appear as covered)
<img width="400" alt="image" src="https://github.com/user-attachments/assets/c1b2e28e-b61b-4126-a2ac-b38b8a511844" />
- without the decorator
<img width="400" alt="image" src="https://github.com/user-attachments/assets/201d21e1-e822-4daa-8bb8-e41748e45de5" />
### Versions
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (x86_64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.7 (main, May 15 2024, 22:19:42) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-14.6.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-boto3-ecr==1.35.21
[pip3] mypy-boto3-s3==1.35.76.post1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.2.2+cpu
[pip3] torchmetrics==1.4.2
[pip3] torchvision==0.17.2+cpu
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,759,624,458
|
pytorch v2.3.1 build for nvidia jetson orin nano 8GB
|
lida2003
|
closed
|
[
"module: build",
"module: jetson"
] | 1
|
NONE
|
### 🐛 Describe the bug
pytorch v2.3.1 build for nvidia jetson orin 8GB
Previous discussion here FYI: https://forums.developer.nvidia.com/t/request-build-script-for-pytorch-or-up-to-date-pytorh-binary-release-supporting-jetson-boards-running-l4t35-6-ubuntu20-04/316972/12
```
$ git log -n 1
commit 63d5e9221bedd1546b7d364b5ce4171547db12a9 (HEAD, tag: v2.3.1, origin/release/2.3)
Author: pytorchbot <soumith+bot@pytorch.org>
Date: Wed May 29 08:15:01 2024 -0700
[EZ] Pin scipy to 1.12 for Py-3.12 (#127322)
[EZ] Pin scipy to 1.12 for Py-3.12 (#123795)
This caused false positive failures/reverts for https://github.com/pytorch/pytorch/pull/123689 and https://github.com/pytorch/pytorch/pull/123595
Fixes https://github.com/pytorch/pytorch/issues/123655
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123795
Approved by: https://github.com/huydhn
(cherry picked from commit 2a597cfd2c63459dd303cf7922eb4c3750a76e75)
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
```
```
Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.4 [L4T 35.6.0]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
- P-Number: p3767-0005
- Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
- Distribution: Ubuntu 20.04 focal
- Release: 5.10.216-tegra
jtop:
- Version: 4.2.12
- Service: Active
Libraries:
- CUDA: 11.4.315
- cuDNN: 8.6.0.166
- TensorRT: 8.5.2.2
- VPI: 2.4.8
- OpenCV: 4.9.0 - with CUDA: YES
DeepStream C/C++ SDK version: 6.3
Python Environment:
Python 3.8.10
GStreamer: YES (1.16.3)
NVIDIA CUDA: YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
OpenCV version: 4.9.0 CUDA True
YOLO version: 8.3.33
Torch version: 2.1.0a0+41361538.nv23.06
Torchvision version: 0.16.1+fdea156
DeepStream SDK version: 1.1.8
```
### Error logs
- LOG 1: first build
```
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/home/daniel/Work/pytorch_v2.3.1/build/aten/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build -I/home/daniel/Work/pytorch_v2.3.1 -I/home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/benchmark/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api/include -I/home/daniel/Work/pytorch_v2.3.1/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/../aten/src -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc -I/home/daniel/Work/pytorch_v2.3.1/third_party/miniz-2.1.0 -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/FXdiv/include -I/home/daniel/Work/pytorch_v2.3.1/c10/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/pthreadpool/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/cpuinfo/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/NNPACK/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/FP16/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/fmt/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/flatbuffers/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googlemock/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googletest/include -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/protobuf/src -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/XNNPACK/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/daniel/Work/pytorch_v2.3.1/build/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -D__NEON__ -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-unused-function -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-strict-overflow -Wno-strict-aliasing 
-Wno-maybe-uninitialized -fvisibility=hidden -O2 -pthread -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o -c /home/daniel/Work/pytorch_v2.3.1/build/aten/src/ATen/Operators_1.cpp
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
[5192/6660] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_2.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_2.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/home/daniel/Work/pytorch_v2.3.1/build/aten/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build -I/home/daniel/Work/pytorch_v2.3.1 -I/home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/benchmark/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api/include -I/home/daniel/Work/pytorch_v2.3.1/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/../aten/src -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc -I/home/daniel/Work/pytorch_v2.3.1/third_party/miniz-2.1.0 -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/FXdiv/include -I/home/daniel/Work/pytorch_v2.3.1/c10/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/pthreadpool/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/cpuinfo/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/NNPACK/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/FP16/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/fmt/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/flatbuffers/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googlemock/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googletest/include -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/protobuf/src -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/XNNPACK/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/daniel/Work/pytorch_v2.3.1/build/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -D__NEON__ -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-unused-function -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-strict-overflow -Wno-strict-aliasing 
-Wno-maybe-uninitialized -fvisibility=hidden -O2 -pthread -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_2.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_2.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_2.cpp.o -c /home/daniel/Work/pytorch_v2.3.1/build/aten/src/ATen/Operators_2.cpp
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
[5193/6660] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/home/daniel/Work/pytorch_v2.3.1/build/aten/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build -I/home/daniel/Work/pytorch_v2.3.1 -I/home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/benchmark/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api/include -I/home/daniel/Work/pytorch_v2.3.1/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/../aten/src -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc -I/home/daniel/Work/pytorch_v2.3.1/third_party/miniz-2.1.0 -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/FXdiv/include -I/home/daniel/Work/pytorch_v2.3.1/c10/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/pthreadpool/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/cpuinfo/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/NNPACK/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/FP16/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/fmt/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/flatbuffers/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googlemock/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googletest/include -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/protobuf/src -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/XNNPACK/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/daniel/Work/pytorch_v2.3.1/build/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -D__NEON__ -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-unused-function -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-strict-overflow -Wno-strict-aliasing 
-Wno-maybe-uninitialized -fvisibility=hidden -O2 -pthread -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o -c /home/daniel/Work/pytorch_v2.3.1/build/aten/src/ATen/RegisterCPU.cpp
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
[5198/6660] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_3.cpp.o
ninja: build stopped: subcommand failed.
```
- LOG 2: second build
```
Building wheel torch-2.3.1
-- Building version 2.3.1
cmake --build . --target install --config Release
[2/1464] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/home/daniel/Work/pytorch_v2.3.1/build/aten/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build -I/home/daniel/Work/pytorch_v2.3.1 -I/home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/benchmark/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api/include -I/home/daniel/Work/pytorch_v2.3.1/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/../aten/src -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc -I/home/daniel/Work/pytorch_v2.3.1/third_party/miniz-2.1.0 -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/FXdiv/include -I/home/daniel/Work/pytorch_v2.3.1/c10/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/pthreadpool/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/cpuinfo/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/NNPACK/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/FP16/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/fmt/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/flatbuffers/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googlemock/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googletest/include -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/protobuf/src -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/XNNPACK/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/daniel/Work/pytorch_v2.3.1/build/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -D__NEON__ -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-unused-function -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-strict-overflow -Wno-strict-aliasing 
-Wno-maybe-uninitialized -fvisibility=hidden -O2 -pthread -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_1.cpp.o -c /home/daniel/Work/pytorch_v2.3.1/build/aten/src/ATen/Operators_1.cpp
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
[3/1464] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/home/daniel/Work/pytorch_v2.3.1/build/aten/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build -I/home/daniel/Work/pytorch_v2.3.1 -I/home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/benchmark/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/onnx -I/home/daniel/Work/pytorch_v2.3.1/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/build/third_party/foxi -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc/api/include -I/home/daniel/Work/pytorch_v2.3.1/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src/TH -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/aten/src -I/home/daniel/Work/pytorch_v2.3.1/build/caffe2/../aten/src -I/home/daniel/Work/pytorch_v2.3.1/torch/csrc -I/home/daniel/Work/pytorch_v2.3.1/third_party/miniz-2.1.0 -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/kineto/libkineto/src -I/home/daniel/Work/pytorch_v2.3.1/aten/src/ATen/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/FXdiv/include -I/home/daniel/Work/pytorch_v2.3.1/c10/.. -I/home/daniel/Work/pytorch_v2.3.1/third_party/pthreadpool/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/cpuinfo/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/NNPACK/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/FP16/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/fmt/include -I/home/daniel/Work/pytorch_v2.3.1/third_party/flatbuffers/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googlemock/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/googletest/googletest/include -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/protobuf/src -isystem /home/daniel/Work/pytorch_v2.3.1/third_party/XNNPACK/include -isystem /home/daniel/Work/pytorch_v2.3.1/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/daniel/Work/pytorch_v2.3.1/build/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -D__NEON__ -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-unused-function -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-strict-overflow -Wno-strict-aliasing 
-Wno-maybe-uninitialized -fvisibility=hidden -O2 -pthread -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/RegisterCPU.cpp.o -c /home/daniel/Work/pytorch_v2.3.1/build/aten/src/ATen/RegisterCPU.cpp
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
[9/1464] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/Operators_2.cpp.o
ninja: build stopped: subcommand failed.
```
### Versions
```
daniel@daniel-nvidia:~/Work/pytorch$ python3 collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.0
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 7 2024, 13:10:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.216-tegra-aarch64-with-glibc2.29
Is CUDA available: N/A
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Thread(s) per core: 1
Core(s) per socket: 3
Socket(s): 2
Vendor ID: ARM
Model: 1
Model name: ARMv8 Processor rev 1 (v8l)
Stepping: r0p1
CPU max MHz: 1510.4000
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 1.5 MiB
L3 cache: 2 MiB
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.3.12
[pip3] onnxruntime==1.16.3
[pip3] onnxruntime-gpu==1.17.0
[pip3] onnxslim==0.1.36
[pip3] optree==0.13.1
[pip3] torch==2.1.0a0+41361538.nv23.6
[pip3] torch2trt==0.5.0
[pip3] torchvision==0.16.1
[conda] Could not collect
```
cc @malfet @seemethere @ptrblck @puririshi98 @chauhang @penguinwu
| true
|
2,759,594,987
|
Fix _create_c10d_store error
|
taozhiwei
|
closed
|
[
"oncall: distributed",
"module: rocm",
"module: cpu",
"release notes: releng",
"fx",
"module: inductor",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @SherlockNoMad @EikanWang @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @zhuhaozhe @blzheng @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn
| true
|
2,759,484,185
|
Can't script a tensorrt model
|
He1pa
|
open
|
[
"oncall: jit"
] | 2
|
NONE
|
### 🐛 Describe the bug
I am a newbie with PyTorch. I am trying to use TensorRT to optimize a model and save it as a TRT engine (*.plan). The pipeline I tried is:
torch -> trt model -> torch script -> trt engine
i.e. scripting a TensorRT-compiled model:
```python
import torch
import torch.nn as nn
import torch_tensorrt


class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()

    def forward(self, x: torch.Tensor):
        x = x * 2
        return x


if __name__ == '__main__':
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = Model().eval().cuda()
    trt_model = torch.compile(model, backend="tensorrt")
    script_model = torch.jit.script(trt_model)
    script_model.save("script_model.ts")
    trt_engine = torch_tensorrt.ts.convert_method_to_trt_engine(
        script_model, inputs=[torch_tensorrt.Input((1, 10))]
    )
    with open("trt_engine.plan", 'wb') as f:
        f.write(trt_engine)
    print("model plan saved")
```
but `script_model = torch.jit.script(trt_model)` raises the following error:
> Traceback (most recent call last):
File "/ossfs/workspace/MetaGR/trt/build_trt_engine.py", line 138, in <module>
script_model = torch.jit.script(trt_model)
File "/opt/conda/envs/*/lib/python3.10/site-packages/torch/jit/_script.py", line 1429, in script
ret = _script_impl(
File "/opt/conda/envs/*/lib/python3.10/site-packages/torch/jit/_script.py", line 1147, in _script_impl
return torch.jit._recursive.create_script_module(
File "/opt/conda/envs/*/lib/python3.10/site-packages/torch/jit/_recursive.py", line 555, in create_script_module
concrete_type = get_module_concrete_type(nn_module, share_types)
File "/opt/conda/envs/*/lib/python3.10/site-packages/torch/jit/_recursive.py", line 504, in get_module_concrete_type
concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
File "/opt/conda/envs/*/lib/python3.10/site-packages/torch/jit/_recursive.py", line 436, in get_or_create_concrete_type
concrete_type_builder = infer_concrete_type_builder(nn_module)
File "/opt/conda/envs/*/lib/python3.10/site-packages/torch/jit/_recursive.py", line 396, in infer_concrete_type_builder
attr_type, inferred = infer_type(name, value)
File "/opt/conda/envs/*/lib/python3.10/site-packages/torch/jit/_recursive.py", line 228, in infer_type
ann_to_type = torch.jit.annotations.ann_to_type(
File "/opt/conda/envs/*/lib/python3.10/site-packages/torch/jit/annotations.py", line 516, in ann_to_type
raise ValueError(f"Unknown type annotation: '{ann}' at {loc.highlight()}")
ValueError: Unknown type annotation: 'Callable[..., Any]' at
I tried to find the reason: the error occurs when `name` is `_torchdynamo_orig_callable`.
torch/jit/_recursive.py
```python
def infer_concrete_type_builder(nn_module, share_types=True):
    ...
    for name, value in nn_module.__dict__.items():
        ...
        attr_type, inferred = infer_type(name, value)
```
I printed `__dict__.items()` for both `model` and `trt_model`, and found that `_torchdynamo_orig_callable` is present in `trt_model` but not in `model`. I don't know what to do next.
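For what it's worth, one workaround sketch (not verified against this TensorRT setup): script the plain eager module instead of the `torch.compile` wrapper, since the wrapper carries Dynamo-specific attributes such as `_torchdynamo_orig_callable` that TorchScript cannot type. The names below reuse the `Model` and `torch_tensorrt` setup from the snippet above.
```python
# Sketch: hand the eager module to TorchScript and let Torch-TensorRT convert the
# scripted graph; assumes the same Model / torch_tensorrt imports as above.
script_model = torch.jit.script(Model().eval().cuda())   # not torch.compile(...)
trt_engine = torch_tensorrt.ts.convert_method_to_trt_engine(
    script_model, inputs=[torch_tensorrt.Input((1, 10))]
)
with open("trt_engine.plan", "wb") as f:
    f.write(trt_engine)
```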
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Alibaba Group Enterprise Linux Server 7.2 (Paladin) (x86_64)
GCC version: (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3 2.17)
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.32
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.9.151-015.ali3000.alios7.x86_64-x86_64-with-glibc2.32
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: A10-1-PCIE-24GB-XGB-V
Nvidia driver version: 470.82.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3499.859
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] lion-pytorch==0.2.2
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==1.9.5
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.1
[pip3] torch_no_python==2.5.0.dev20240816+cu121
[pip3] torch_tensorrt==2.5.0.dev20240816+cu121
[pip3] torchao==0.7.0+git75f52ae7
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.4.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] lion-pytorch 0.2.2 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torch-no-python 2.5.0.dev20240816+cu121 pypi_0 pypi
[conda] torch-tensorrt 2.5.0.dev20240816+cu121 pypi_0 pypi
[conda] torchao 0.7.0+git75f52ae7 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,759,467,427
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu"
] | 4
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [214f33](https://github.com/intel/torch-xpu-ops/commit/214f33b9d969930a18656a82b5c5d8da53cdcb8e), includes:
- Fix building issue for transformer related operators
- Improve XPU operator coverage
| true
|
2,759,457,585
|
[CPU][Operator] one channel_shuffle test of the operator benchmark has a Performance fluctuation issue
|
LifengWang
|
open
|
[
"needs reproduction",
"module: performance",
"module: nn",
"triaged"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
While running the operator benchmark I found that one channel_shuffle test shows a performance fluctuation issue. The affected test is `channel_shuffle_batch_size4_channels_per_group64_height64_width64_groups4_channel_lastTrue`.
I set up the test environment according to the [Operator Micro-benchmarks README](https://github.com/pytorch/pytorch/tree/main/benchmarks/operator_benchmark) and ran the following command 10 times.
```
taskset -c 0-23 python -m pt.channel_shuffle_test --test-name channel_shuffle_batch_size4_channels_per_group64_height64_width64_groups4_channel_lastTrue
```
Here are the test results from my environment. We can clearly observe significant performance fluctuations in the test logs for rounds 3 and 9.
```
channel_shuffle_test_round_10.log:Forward Execution Time (us) : 120.766
channel_shuffle_test_round_10.log:Forward Execution Time (us) : 119.556
channel_shuffle_test_round_1.log:Forward Execution Time (us) : 120.853
channel_shuffle_test_round_1.log:Forward Execution Time (us) : 119.538
channel_shuffle_test_round_2.log:Forward Execution Time (us) : 117.764
channel_shuffle_test_round_2.log:Forward Execution Time (us) : 117.233
channel_shuffle_test_round_3.log:Forward Execution Time (us) : 789.170
channel_shuffle_test_round_3.log:Forward Execution Time (us) : 118.370
channel_shuffle_test_round_4.log:Forward Execution Time (us) : 118.316
channel_shuffle_test_round_4.log:Forward Execution Time (us) : 117.791
channel_shuffle_test_round_5.log:Forward Execution Time (us) : 118.098
channel_shuffle_test_round_5.log:Forward Execution Time (us) : 120.020
channel_shuffle_test_round_6.log:Forward Execution Time (us) : 118.721
channel_shuffle_test_round_6.log:Forward Execution Time (us) : 117.861
channel_shuffle_test_round_7.log:Forward Execution Time (us) : 119.729
channel_shuffle_test_round_7.log:Forward Execution Time (us) : 119.001
channel_shuffle_test_round_8.log:Forward Execution Time (us) : 119.005
channel_shuffle_test_round_8.log:Forward Execution Time (us) : 117.391
channel_shuffle_test_round_9.log:Forward Execution Time (us) : 858.333
channel_shuffle_test_round_9.log:Forward Execution Time (us) : 117.732
```
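For reference, a minimal standalone timing sketch (an assumption on my side: the fluctuation should also be observable outside the operator_benchmark harness; the tensor shape is taken from the test name):
```python
import time

import torch

# batch_size=4, channels_per_group=64, groups=4 -> 256 channels, H=W=64, channels_last
x = torch.randn(4, 256, 64, 64).to(memory_format=torch.channels_last)

def bench(iters=100):
    for _ in range(10):                      # warm-up
        torch.channel_shuffle(x, 4)
    start = time.perf_counter()
    for _ in range(iters):
        torch.channel_shuffle(x, 4)
    return (time.perf_counter() - start) / iters * 1e6  # microseconds per call

for r in range(10):
    print(f"round {r}: {bench():.3f} us")
```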
### Versions
Versions
```
PyTorch version: 2.6.0.dev20241224+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.9.21 | packaged by conda-forge | (main, Dec 5 2024, 13:51:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-192-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 3400.001
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 2.6 MiB
L1i cache: 1.8 MiB
L2 cache: 70 MiB
L3 cache: 84 MiB
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Vulnerable, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0.dev20241224+cpu
[pip3] torchaudio==2.6.0.dev20241224+cpu
[pip3] torchvision==0.22.0.dev20241224+cpu
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.6.0.dev20241224+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241224+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20241224+cpu pypi_0 pypi
```
cc @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,759,446,630
|
[CI] Disable sccache for xpu test
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Workaround (WA) for https://github.com/pytorch/pytorch/issues/143585
| true
|
2,759,412,541
|
[WIP] [Inductor][CPP] Support Group GEMM Epilogue Fusion
|
leslie-fang-intel
|
closed
|
[
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143850
* #143820
* #143796
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,759,384,359
|
Refine CUDA Stream priority
|
guangyey
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: improvements"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143849
* #143799
* #141123
* #141119
* #142347
# Motivation
As mentioned in https://github.com/pytorch/pytorch/pull/141119#discussion_r1897480515, we properly handle the priority value if it is outside of the priority range.
# Additional Context
If the value falls outside of the allowed priority range, it is automatically mapped to the nearest valid priority (either the lowest or the highest).
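A tiny illustration of the mapping rule in plain Python (not the actual C++ implementation in this PR; CUDA reports the range as a (least, greatest) pair where a numerically smaller value means a higher priority):
```python
def clamp_stream_priority(requested: int, least: int, greatest: int) -> int:
    # Map an out-of-range request to the nearest bound instead of raising.
    lo, hi = min(least, greatest), max(least, greatest)
    return max(lo, min(requested, hi))

# e.g. with a (0, -5) range: -100 -> -5 (highest priority), 7 -> 0 (lowest priority)
print(clamp_stream_priority(-100, 0, -5), clamp_stream_priority(7, 0, -5))
```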
| true
|
2,759,286,265
|
[Inductor][CPU] Fix C++ compile error of torch.max on bool type
|
blzheng
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143848
Fix https://github.com/pytorch/pytorch/issues/143568
Before:

After:

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,759,239,903
|
memory_format=torch.preserve_format doesn't apply to tensors with strides of zero
|
EmmettBicker
|
closed
|
[
"triage review",
"module: python frontend"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The memory_format=torch.preserve_format option seems to be ignored for tensors that have a 0 stride somewhere, as in the following example. I don't know if this is intentional or not, but I wanted to bring it up in case it isn't!
```py
import torch
arg = torch.randn([2,1]).expand(2,2)
print(arg.stride()) # (1, 0)
print(arg.clone(memory_format=torch.preserve_format).stride()) # (2, 1)
```
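For comparison, a small sketch contrasting a dense permuted tensor (strides kept) with the stride-0 expanded tensor (materialized contiguously), which appears consistent with the documented `torch.preserve_format` behavior of only copying strides for dense, non-overlapping inputs:
```python
import torch

dense = torch.randn(2, 3, 4).permute(2, 0, 1)   # non-overlapping, dense
print(dense.stride(), dense.clone(memory_format=torch.preserve_format).stride())       # strides are copied

expanded = torch.randn(2, 1).expand(2, 2)       # stride (1, 0): rows alias the same storage
print(expanded.stride(), expanded.clone(memory_format=torch.preserve_format).stride()) # (2, 1): contiguous fallback
```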
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060 with Max-Q Design
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 4900HS with Radeon Graphics
CPU family: 23
Model: 96
Thread(s) per core: 2
Core(s) per socket: 3
Socket(s): 1
Stepping: 1
BogoMIPS: 5988.75
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip rdpid
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 96 KiB (3 instances)
L1i cache: 96 KiB (3 instances)
L2 cache: 1.5 MiB (3 instances)
L3 cache: 4 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1+cu124
[pip3] torchaudio==2.5.1+cu124
[pip3] torchvision==0.20.1+cu124
```
cc @albanD
| true
|
2,759,233,264
|
Check F2C BLAS for OpenBLAS and other vendors
|
isuruf
|
open
|
[
"open source",
"release notes: build"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143846
This issue came from https://github.com/conda-forge/pytorch-cpu-feedstock/issues/180. MKL follows the F2C convention for returning single-precision floats as doubles and uses the G77 convention for returning complex-valued scalars; OpenBLAS does the opposite. There is already a check for this, but it only runs when the Generic BLAS vendor code path is used. This PR moves that check to `Dependencies.cmake` so that it also works when the BLAS vendor is OpenBLAS or another vendor.
| true
|
2,759,200,450
|
[Inductor][lowering] support out_dtype for dequant lowering
|
Valentine233
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
In lowering, support the `out_dtype` parameter for `dequant_per_tensor` and `dequant_per_channel`.
This fixes the following runtime error found in https://github.com/pytorch/ao/pull/1372:
```
File "/home/liaoxuan/pytorch_ao/torch/_inductor/lowering.py", line 452, in wrapped
out = decomp_fn(*args, **kwargs)
torch._dynamo.exc.BackendCompilerFailed: backend='compile_fx_wrapper' raised:
LoweringException: TypeError: quantized_decomposed_dequantize_per_tensor_default() got an unexpected keyword argument 'out_dtype'
target: quantized_decomposed.dequantize_per_tensor.default
args[0]: TensorBox(StorageBox(
InputBuffer(name='arg0_1', layout=FixedLayout('cpu', torch.uint8, size=[1, 7, 7, 9], stride=[441, 63, 9, 1]))
))
args[1]: 0.01
args[2]: 100
args[3]: 0
args[4]: 255
args[5]: torch.uint8
kwargs: {'out_dtype': torch.bfloat16}
```
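For context, a sketch of the call pattern that hits this (assumptions: the `quantized_decomposed` ops are registered by importing `torch.ao.quantization.fx._decomposed`, which may differ across versions, and the argument values simply mirror the log above):
```python
import torch
import torch.ao.quantization.fx._decomposed  # noqa: F401  (assumed registration path for quantized_decomposed ops)

def dequant(x):
    # Per-tensor dequantization with an explicit output dtype, as in the log above.
    return torch.ops.quantized_decomposed.dequantize_per_tensor(
        x, 0.01, 100, 0, 255, torch.uint8, out_dtype=torch.bfloat16
    )

x = torch.randint(0, 255, (1, 7, 7, 9), dtype=torch.uint8)
print(torch.compile(dequant)(x).dtype)  # expected torch.bfloat16 once out_dtype is supported in lowering
```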
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,759,148,084
|
Bump jinja2 from 3.1.4 to 3.1.5 in /.ci/docker
|
dependabot[bot]
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"dependency issue",
"python"
] | 4
|
CONTRIBUTOR
|
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.4 to 3.1.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/releases">jinja2's releases</a>.</em></p>
<blockquote>
<h2>3.1.5</h2>
<p>This is the Jinja 3.1.5 security fix release, which fixes security issues and bugs but does not otherwise change behavior and should not result in breaking changes compared to the latest feature release.</p>
<p>PyPI: <a href="https://pypi.org/project/Jinja2/3.1.5/">https://pypi.org/project/Jinja2/3.1.5/</a>
Changes: <a href="https://jinja.palletsprojects.com/changes/#version-3-1-5">https://jinja.palletsprojects.com/changes/#version-3-1-5</a>
Milestone: <a href="https://github.com/pallets/jinja/milestone/16?closed=1">https://github.com/pallets/jinja/milestone/16?closed=1</a></p>
<ul>
<li>The sandboxed environment handles indirect calls to <code>str.format</code>, such as by passing a stored reference to a filter that calls its argument. <a href="https://github.com/pallets/jinja/security/advisories/GHSA-q2x7-8rv6-6q7h">GHSA-q2x7-8rv6-6q7h</a></li>
<li>Escape template name before formatting it into error messages, to avoid issues with names that contain f-string syntax. <a href="https://redirect.github.com/pallets/jinja/issues/1792">#1792</a>, <a href="https://github.com/pallets/jinja/security/advisories/GHSA-gmj6-6f8f-6699">GHSA-gmj6-6f8f-6699</a></li>
<li>Sandbox does not allow <code>clear</code> and <code>pop</code> on known mutable sequence types. <a href="https://redirect.github.com/pallets/jinja/issues/2032">#2032</a></li>
<li>Calling sync <code>render</code> for an async template uses <code>asyncio.run</code>. <a href="https://redirect.github.com/pallets/jinja/issues/1952">#1952</a></li>
<li>Avoid unclosed <code>auto_aiter</code> warnings. <a href="https://redirect.github.com/pallets/jinja/issues/1960">#1960</a></li>
<li>Return an <code>aclose</code>-able <code>AsyncGenerator</code> from <code>Template.generate_async</code>. <a href="https://redirect.github.com/pallets/jinja/issues/1960">#1960</a></li>
<li>Avoid leaving <code>root_render_func()</code> unclosed in <code>Template.generate_async</code>. <a href="https://redirect.github.com/pallets/jinja/issues/1960">#1960</a></li>
<li>Avoid leaving async generators unclosed in blocks, includes and extends. <a href="https://redirect.github.com/pallets/jinja/issues/1960">#1960</a></li>
<li>The runtime uses the correct <code>concat</code> function for the current environment when calling block references. <a href="https://redirect.github.com/pallets/jinja/issues/1701">#1701</a></li>
<li>Make <code>|unique</code> async-aware, allowing it to be used after another async-aware filter. <a href="https://redirect.github.com/pallets/jinja/issues/1781">#1781</a></li>
<li><code>|int</code> filter handles <code>OverflowError</code> from scientific notation. <a href="https://redirect.github.com/pallets/jinja/issues/1921">#1921</a></li>
<li>Make compiling deterministic for tuple unpacking in a <code>{% set ... %}</code> call. <a href="https://redirect.github.com/pallets/jinja/issues/2021">#2021</a></li>
<li>Fix dunder protocol (<code>copy</code>/<code>pickle</code>/etc) interaction with <code>Undefined</code> objects. <a href="https://redirect.github.com/pallets/jinja/issues/2025">#2025</a></li>
<li>Fix <code>copy</code>/<code>pickle</code> support for the internal <code>missing</code> object. <a href="https://redirect.github.com/pallets/jinja/issues/2027">#2027</a></li>
<li><code>Environment.overlay(enable_async)</code> is applied correctly. <a href="https://redirect.github.com/pallets/jinja/issues/2061">#2061</a></li>
<li>The error message from <code>FileSystemLoader</code> includes the paths that were searched. <a href="https://redirect.github.com/pallets/jinja/issues/1661">#1661</a></li>
<li><code>PackageLoader</code> shows a clearer error message when the package does not contain the templates directory. <a href="https://redirect.github.com/pallets/jinja/issues/1705">#1705</a></li>
<li>Improve annotations for methods returning copies. <a href="https://redirect.github.com/pallets/jinja/issues/1880">#1880</a></li>
<li><code>urlize</code> does not add <code>mailto:</code> to values like <code>@a@b</code>. <a href="https://redirect.github.com/pallets/jinja/issues/1870">#1870</a></li>
<li>Tests decorated with <code>@pass_context</code> can be used with the <code>|select</code> filter. <a href="https://redirect.github.com/pallets/jinja/issues/1624">#1624</a></li>
<li>Using <code>set</code> for multiple assignment (<code>a, b = 1, 2</code>) does not fail when the target is a namespace attribute. <a href="https://redirect.github.com/pallets/jinja/issues/1413">#1413</a></li>
<li>Using <code>set</code> in all branches of <code>{% if %}{% elif %}{% else %}</code> blocks does not cause the variable to be considered initially undefined. <a href="https://redirect.github.com/pallets/jinja/issues/1253">#1253</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/blob/main/CHANGES.rst">jinja2's changelog</a>.</em></p>
<blockquote>
<h2>Version 3.1.5</h2>
<p>Released 2024-12-21</p>
<ul>
<li>The sandboxed environment handles indirect calls to <code>str.format</code>, such as
by passing a stored reference to a filter that calls its argument.
:ghsa:<code>q2x7-8rv6-6q7h</code></li>
<li>Escape template name before formatting it into error messages, to avoid
issues with names that contain f-string syntax.
:issue:<code>1792</code>, :ghsa:<code>gmj6-6f8f-6699</code></li>
<li>Sandbox does not allow <code>clear</code> and <code>pop</code> on known mutable sequence
types. :issue:<code>2032</code></li>
<li>Calling sync <code>render</code> for an async template uses <code>asyncio.run</code>.
:pr:<code>1952</code></li>
<li>Avoid unclosed <code>auto_aiter</code> warnings. :pr:<code>1960</code></li>
<li>Return an <code>aclose</code>-able <code>AsyncGenerator</code> from
<code>Template.generate_async</code>. :pr:<code>1960</code></li>
<li>Avoid leaving <code>root_render_func()</code> unclosed in
<code>Template.generate_async</code>. :pr:<code>1960</code></li>
<li>Avoid leaving async generators unclosed in blocks, includes and extends.
:pr:<code>1960</code></li>
<li>The runtime uses the correct <code>concat</code> function for the current environment
when calling block references. :issue:<code>1701</code></li>
<li>Make <code>|unique</code> async-aware, allowing it to be used after another
async-aware filter. :issue:<code>1781</code></li>
<li><code>|int</code> filter handles <code>OverflowError</code> from scientific notation.
:issue:<code>1921</code></li>
<li>Make compiling deterministic for tuple unpacking in a <code>{% set ... %}</code>
call. :issue:<code>2021</code></li>
<li>Fix dunder protocol (<code>copy</code>/<code>pickle</code>/etc) interaction with <code>Undefined</code>
objects. :issue:<code>2025</code></li>
<li>Fix <code>copy</code>/<code>pickle</code> support for the internal <code>missing</code> object.
:issue:<code>2027</code></li>
<li><code>Environment.overlay(enable_async)</code> is applied correctly. :pr:<code>2061</code></li>
<li>The error message from <code>FileSystemLoader</code> includes the paths that were
searched. :issue:<code>1661</code></li>
<li><code>PackageLoader</code> shows a clearer error message when the package does not
contain the templates directory. :issue:<code>1705</code></li>
<li>Improve annotations for methods returning copies. :pr:<code>1880</code></li>
<li><code>urlize</code> does not add <code>mailto:</code> to values like <code>@a@b</code>. :pr:<code>1870</code></li>
<li>Tests decorated with <code>@pass_context</code> can be used with the <code>|select</code> filter. :issue:<code>1624</code></li>
<li>Using <code>set</code> for multiple assignment (<code>a, b = 1, 2</code>) does not fail when the
target is a namespace attribute. :issue:<code>1413</code></li>
<li>Using <code>set</code> in all branches of <code>{% if %}{% elif %}{% else %}</code> blocks
does not cause the variable to be considered initially undefined.
:issue:<code>1253</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pallets/jinja/commit/877f6e51be8e1765b06d911cfaa9033775f051d1"><code>877f6e5</code></a> release version 3.1.5</li>
<li><a href="https://github.com/pallets/jinja/commit/8d588592653b052f957b720e1fc93196e06f207f"><code>8d58859</code></a> remove test pypi</li>
<li><a href="https://github.com/pallets/jinja/commit/eda8fe86fd716dfce24910294e9f1fc81fbc740c"><code>eda8fe8</code></a> update dev dependencies</li>
<li><a href="https://github.com/pallets/jinja/commit/c8fdce1e0333f1122b244b03a48535fdd7b03d91"><code>c8fdce1</code></a> Fix bug involving calling set on a template parameter within all branches of ...</li>
<li><a href="https://github.com/pallets/jinja/commit/66587ce989e5a478e0bb165371fa2b9d42b7040f"><code>66587ce</code></a> Fix bug where set would sometimes fail within if</li>
<li><a href="https://github.com/pallets/jinja/commit/fbc3a696c729d177340cc089531de7e2e5b6f065"><code>fbc3a69</code></a> Add support for namespaces in tuple parsing (<a href="https://redirect.github.com/pallets/jinja/issues/1664">#1664</a>)</li>
<li><a href="https://github.com/pallets/jinja/commit/b8f4831d41e6a7cb5c40d42f074ffd92d2daccfc"><code>b8f4831</code></a> more comments about nsref assignment</li>
<li><a href="https://github.com/pallets/jinja/commit/ee832194cd9f55f75e5a51359b709d535efe957f"><code>ee83219</code></a> Add support for namespaces in tuple assignment</li>
<li><a href="https://github.com/pallets/jinja/commit/1d55cddbb28e433779511f28f13a2d8c4ec45826"><code>1d55cdd</code></a> Triple quotes in docs (<a href="https://redirect.github.com/pallets/jinja/issues/2064">#2064</a>)</li>
<li><a href="https://github.com/pallets/jinja/commit/8a8eafc6b992ba177f1d3dd483f8465f18a11116"><code>8a8eafc</code></a> edit block assignment section</li>
<li>Additional commits viewable in <a href="https://github.com/pallets/jinja/compare/3.1.4...3.1.5">compare view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/pytorch/pytorch/network/alerts).
</details>
| true
|
2,759,146,299
|
[Submodule] Bump libfmt to 11.1.0
|
cyyever
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,759,145,849
|
subgraph rewriter supports matched pattern with no users
|
YangQun1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: fx",
"fx"
] | 8
|
CONTRIBUTOR
|
Fixes #143841
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
2,759,143,372
|
Subgraph rewriter failed when the matched pattern has no users
|
YangQun1
|
closed
|
[
"oncall: pt2",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
The subgraph rewriter throws the error "The returning_nodes should have at least one user node" when the matched pattern has no users in the original graph.
This can be reproduced with the example below:
```python
import torch
from torch.fx import subgraph_rewriter, symbolic_trace


class M(torch.nn.Module):
    def forward(self, x, y, cache):
        m = torch.mul(x, y)
        n = cache.index_copy(0, torch.tensor([0]), m)
        p = torch.ops.aten.copy.default(cache, n)
        q = torch.ops.aten.copy_.default(cache, p)
        u = torch.relu(cache)
        return u  # check the result to ensure cache is updated before the relu op


def pattern(self_tensor, src_tensor):
    p = torch.ops.aten.copy.default(self_tensor, src_tensor)
    q = torch.ops.aten.copy_.default(self_tensor, p)
    return q


def replacement(self_tensor, src_tensor):
    q = torch.ops.aten.copy_.default(self_tensor, src_tensor)
    return q


def comparison(x, y, cache):
    m = torch.mul(x, y)
    n = cache.index_copy(0, torch.tensor([0]), m)
    q = torch.ops.aten.copy_.default(cache, n)
    u = torch.relu(cache)
    return u


traced = symbolic_trace(M())
print(traced)
comparison_fn = symbolic_trace(comparison)
print(comparison_fn)

subgraph_rewriter.replace_pattern(traced, pattern, replacement)
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 4389.68
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==8.7.0.84
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.20.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.0.0+45fff310c8
[pip3] torch==2.6.0a0+gitf6cd540
[pip3] torchaudio==2.2.0.dev20240429+cu118
[pip3] torchvision==0.20.0.dev20240726+cpu
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,759,123,651
|
FlexAttention `create_block_mask` contains a CUDA sync
|
moinnadeem
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
NONE
|
### 🐛 Describe the bug
I am trying to capture our model's forward pass into a CUDA graph, but FlexAttention's `create_block_mask` performs a synchronizing CUDA operation, which breaks the capture.
I'm honestly not sure if this is a "bug" or a "feature request".
I have tested `create_block_mask` both with and without `_compile=True`, and the sync happens in both cases.
Relevant stack trace:
```
RuntimeError: called a synchronizing CUDA operation
While executing %setitem : [num_users=0] = call_function[target=operator.setitem](args = (%dense_mask_2, (%row_indi
ces, %valid_indices), 1), kwargs = {})
Original traceback:
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 893, in c
reate_block_mask
block_mask = _create_sparse_block_from_block_mask(
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 765, in _
create_sparse_block_from_block_mask
return BlockMask.from_kv_blocks(
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 353, in f
rom_kv_blocks
q_num_blocks, q_indices = _transpose_ordered(kv_num_blocks, kv_indices)
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 187, in _
transpose_ordered
dense = _ordered_to_dense(num_blocks_in_row, col_indices)
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 172, in _
ordered_to_dense
out = create_dense_batched(num_blocks_in_row, col_indices)
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/envs/main-env/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 165, in c
reate_dense_one
dense_mask[row_indices, valid_indices] = 1
```
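A possible workaround, sketched below under the assumption that the mask pattern is static across captured iterations, is to build the `BlockMask` once eagerly, outside the captured/compiled region, and only reuse it inside the capture (the shapes and the `causal` mask here are illustrative):
```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

B, H, S, D = 1, 8, 1024, 64

# The synchronizing DtoH copy happens here, before any graph capture.
block_mask = create_block_mask(causal, B, H, S, S, device="cuda")

compiled_attn = torch.compile(flex_attention)
q = k = v = torch.randn(B, H, S, D, device="cuda")

# Inside the captured/compiled region only the precomputed mask is reused,
# so no synchronizing call is issued during capture.
out = compiled_attn(q, k, v, block_mask=block_mask)
```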
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241211+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 30
On-line CPU(s) list: 0-29
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 30
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7J13 64-Core Processor
Stepping: 1
CPU MHz: 2449.998
BogoMIPS: 4899.99
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.9 MiB
L1i cache: 1.9 MiB
L2 cache: 15 MiB
L3 cache: 480 MiB
NUMA node0 CPU(s): 0-29
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq
ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_ad
just bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip pku ospke vaes vpclmulqdq rdpid fsrm arch_capabilities
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-dlprof-pytorch-nvtx==1.8.0
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] pytorch-ignite==0.5.0.post2
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241211+cu126
[pip3] torch-stoi==0.2.3
[pip3] torchaudio==2.5.0.dev20241211+cu126
[pip3] torchcde==0.2.5
[pip3] torchcfm==1.0.6
[pip3] torchdiffeq==0.2.0
[pip3] torchdyn==1.0.6
[pip3] torchmetrics==1.5.2
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0.dev20241211+cu126
[pip3] triton==3.1.0
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] libopenvino-pytorch-frontend 2024.3.0 he02047a_0 conda-forge
[conda] mkl 2022.1.0 hc2b9512_224
[conda] numpy 1.26.4 py311h64a7726_0 conda-forge
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-dlprof-pytorch-nvtx 1.8.0 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-ignite 0.5.0.post2 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241211+cu126 pypi_0 pypi
[conda] torch-stoi 0.2.3 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241211+cu126 pypi_0 pypi
[conda] torchcde 0.2.5 pypi_0 pypi
[conda] torchcfm 1.0.6 pypi_0 pypi
[conda] torchdiffeq 0.2.0 pypi_0 pypi
[conda] torchdyn 1.0.6 pypi_0 pypi
[conda] torchmetrics 1.5.2 pyhe5570ce_0 conda-forge
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchtriton 3.1.0 py311 pytorch
[conda] torchvision 0.20.0.dev20241211+cu126 pypi_0 pypi
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,759,055,014
|
[CD] Remove redundant triton dependency for xpu wheels
|
chuanqi129
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 8
|
COLLABORATOR
|
Because XPU CD wheels enabled PyPI dependencies in https://github.com/pytorch/pytorch/pull/141135, PYTORCH_EXTRA_INSTALL_REQUIREMENTS now has a value for the XPU CD wheel build.
Works for https://github.com/pytorch/pytorch/issues/139722 and https://github.com/pytorch/pytorch/issues/114850
Fixes #143838
| true
|
2,759,054,047
|
PyTorch XPU 2.6 RC wheel has multiple triton dependencies
|
chuanqi129
|
closed
|
[
"triaged",
"module: xpu"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
Currently, the XPU CD Linux wheels depend on multiple Triton PyPI packages, both `triton` and `pytorch-triton-xpu`; see the install log below:
```
$ pip install torch==2.6 --index-url https://download.pytorch.org/whl/test/xpu
Looking in indexes: https://download.pytorch.org/whl/test/xpu
Collecting torch==2.6
Using cached https://download.pytorch.org/whl/test/xpu/torch-2.6.0%2Bxpu-cp310-cp310-linux_x86_64.whl (1027.1 MB)
Collecting filelock (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting typing-extensions>=4.10.0 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting sympy==1.13.1 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/sympy-1.13.1-py3-none-any.whl (6.2 MB)
Collecting networkx (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/networkx-3.3-py3-none-any.whl (1.7 MB)
Collecting jinja2 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/Jinja2-3.1.4-py3-none-any.whl (133 kB)
Collecting fsspec (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/fsspec-2024.6.1-py3-none-any.whl (177 kB)
Collecting intel-cmplr-lib-rt==2025.0.2 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/xpu/intel_cmplr_lib_rt-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (45.9 MB)
Collecting intel-cmplr-lib-ur==2025.0.2 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/xpu/intel_cmplr_lib_ur-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (25.1 MB)
Collecting intel-cmplr-lic-rt==2025.0.2 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/xpu/intel_cmplr_lic_rt-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (18 kB)
Collecting intel-sycl-rt==2025.0.2 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/xpu/intel_sycl_rt-2025.0.2-py2.py3-none-manylinux_2_28_x86_64.whl (12.4 MB)
Collecting tcmlib==1.2.0 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/xpu/tcmlib-1.2.0-py2.py3-none-manylinux_2_28_x86_64.whl (4.2 MB)
Collecting umf==0.9.1 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/xpu/umf-0.9.1-py2.py3-none-manylinux_2_28_x86_64.whl (161 kB)
Collecting intel-pti==0.10.0 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/xpu/intel_pti-0.10.0-py2.py3-none-manylinux_2_28_x86_64.whl (651 kB)
Collecting triton==3.2.0 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/triton-3.2.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (163.3 MB)
Collecting pytorch-triton-xpu==3.2.0 (from torch==2.6)
Using cached https://download.pytorch.org/whl/test/pytorch_triton_xpu-3.2.0-cp310-cp310-linux_x86_64.whl (348.4 MB)
Collecting packaging (from pytorch-triton-xpu==3.2.0->torch==2.6)
Using cached https://download.pytorch.org/whl/test/packaging-22.0-py3-none-any.whl (42 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy==1.13.1->torch==2.6)
Using cached https://download.pytorch.org/whl/test/mpmath-1.3.0-py3-none-any.whl (536 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.6)
Using cached https://download.pytorch.org/whl/test/MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Installing collected packages: triton, tcmlib, mpmath, intel-pti, intel-cmplr-lic-rt, intel-cmplr-lib-rt, umf, typing-extensions, sympy, packaging, networkx, MarkupSafe, fsspec, filelock, pytorch-triton-xpu, jinja2, intel-cmplr-lib-ur, intel-sycl-rt, torch
Successfully installed MarkupSafe-2.1.5 filelock-3.13.1 fsspec-2024.6.1 intel-cmplr-lib-rt-2025.0.2 intel-cmplr-lib-ur-2025.0.2 intel-cmplr-lic-rt-2025.0.2 intel-pti-0.10.0 intel-sycl-rt-2025.0.2 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 packaging-22.0 pytorch-triton-xpu-3.2.0 sympy-1.13.1 tcmlib-1.2.0 torch-2.6.0+xpu triton-3.2.0 typing-extensions-4.12.2 umf-0.9.1
```
### Versions
pytorch 2.6.0 and latest main
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,758,895,058
|
[BE]: Update mypy to 1.14.0
|
Skylion007
|
closed
|
[
"open source",
"Stale",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Updates mypy to the latest and greatest
| true
|
2,758,891,270
|
Integration of AdamCPR Optimizer into PyTorch
|
ZiadHelal
|
open
|
[
"module: optimizer",
"triaged"
] | 1
|
NONE
|
### 🚀 The feature, motivation and pitch
# Proposal: Integration of AdamCPR Optimizer into PyTorch
**Authors:**
- @ZiadHelal
## **Summary**
We propose the integration of AdamCPR, a novel deep learning optimizer developed at the University of Freiburg, into PyTorch's core optimizer library. AdamCPR builds upon the widely adopted AdamW (also originating from our lab) and introduces Constrained Parameter Regularization (CPR) to improve optimization dynamics and generalization. CPR enforces adaptive and individualized regularization constraints across parameter matrices, requiring minimal hyperparameter tuning while outperforming AdamW on diverse tasks such as language modeling, image classification, and medical image segmentation.
For details, see our paper: [Improving Deep Learning Optimization through Constrained Parameter Regularization](https://arxiv.org/abs/2311.09058).
## **Motivation**
AdamCPR addresses key limitations of uniform regularization in traditional optimizers, offering:
- **Dynamic Regularization:** CPR adapts the penalty strength during training, eliminating the need for manual weight decay scheduling.
- **Improved Performance:** Demonstrated gains in multiple benchmarks, including CIFAR100 (+1.5% accuracy), ImageNet (+2-3% accuracy on DeiT models), and GPT-2 pretraining (33% reduced training time to achieve comparable perplexity).
- **Wide Applicability:** Suitable for diverse tasks, including fine-tuning large-scale models and training robust classifiers in noisy settings.
### Experimental Highlights
1. **GPT-2 Pretraining:** CPR achieved the same perplexity as AdamW with **33% fewer training steps**, translating into significant computational savings.
2. **Image Classification:** CPR outperformed AdamW in training ResNet18 on CIFAR100 with +1.5% accuracy and DeiT-Small on ImageNet with +2% top-1 accuracy.
3. **Medical Image Segmentation:** CPR improved Dice scores in tasks such as Brain Tumor Segmentation (+0.43%) and Multi-Atlas Labeling (+0.24%) compared to SGD with weight decay.
### Addressing Concerns
1. While AdamCPR is implemented in our [lab's GitHub repository](https://github.com/automl/CPR) and PyTorch encourages the exploration of optimizers in third-party libraries, we believe AdamCPR merits inclusion in the core library due to its foundational improvements, broad applicability, and lineage from AdamW, which is a widely used and trusted optimizer in PyTorch. Integrating AdamCPR would provide the community with a robust, efficient, and ready-to-use tool, fostering adoption and reducing the need for users to implement or maintain custom solutions.
2. CPR has been tested extensively across diverse domains, achieving consistent performance gains with minimal to zero hyperparameter tuning.
3. Our lab has pioneered impactful contributions like AdamW, and AdamCPR continues this trajectory, representing the cutting edge of optimizer research.
## **Proposed Implementation**
AdamCPR builds on PyTorch’s existing optimizer framework, ensuring compatibility and ease of integration:
1. **Single Tensor & Multi (foreach) Tensor Implementation:** This is already implemented in our repo.
2. **Fused Implementation (Planned):** Targeted for CUDA optimization, offering a significant speedup in large-scale deployments.
## **Metrics**
- Performance improvement over AdamW on diverse tasks:
- CIFAR100: +1.5% accuracy on ResNet18.
- ImageNet: +2-3% accuracy on DeiT models.
- GPT-2: 33% reduction in training budget for equivalent perplexity.
- Reduction in hyperparameter tuning effort (e.g., weight decay).
- Computational efficiency compared to baseline optimizers (runtime increase <6%).
## **Drawbacks**
1. **Runtime Overhead:** Minor increase in training time (~0.5–5% in most settings) due to additional computations for constraints.
We look forward to feedback from the PyTorch team, specifically Optimizers maintainers @janeyx99 @albanD @jbschlosser.
### Alternatives
_No response_
### Additional context
_No response_
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar
| true
|
2,758,860,352
|
[inductor] Simplify get_launch_args_* handling
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/inductor-rocm"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143835
* #143818
* #143817
* #143815
* #143814
* #143813
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,758,847,815
|
Copy trans fixl2 miss
|
coderfeli
|
closed
|
[
"oncall: distributed",
"module: rocm",
"release notes: releng",
"module: inductor"
] | 2
|
NONE
|
Fixes #ISSUE_NUMBER
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,758,846,948
|
[1/N]Add Intel GPU Support to Torch Test Cases
|
daisyden
|
closed
|
[
"triaged",
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/xpu",
"ci-no-td"
] | 7
|
NONE
|
As the first step toward https://github.com/pytorch/pytorch/issues/142029:
- Define device checkers in common_utils.py to facilitate test generalization, for example GPU_TYPE for the currently available GPU device.
- Define dtypesIfGPU and backward_dtypesIfGPU in OpInfo
- Use GPU_TYPE, dtypesIfGPU and backward_dtypesIfGPU to make op_db general for GPU devices.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,758,831,533
|
flex_attention: OutOfResources
|
rmmr
|
closed
|
[
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 1
|
NONE
|
### 🐛 Describe the bug
Not sure if my expectations are wrong, but this should just work?
```
import torch
from torch.nn.attention.flex_attention import flex_attention
torch.compiler.reset()
flex_attention = torch.compile(flex_attention)
torch.manual_seed(1)
x = torch.rand(1, 1, 32, 256).to(device="cuda")
flex_attention(x, x, x)
```
- dim `128` still works
- but `256` and above all fail.
- `torch.nn.functional.scaled_dot_product_attention` simply works for any dim `256 - 16384`
### Raises
```
BackendCompilerFailed: backend='inductor' raised:
OutOfResources: out of resource: shared memory, Required: 106496, Hardware limit: 101376. Reducing block sizes or `num_stages` may help.
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
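A possible workaround (a minimal sketch, not a confirmed fix) is to shrink the tile sizes the generated kernel uses via `kernel_options`, which can bring the shared-memory requirement under the hardware limit for head dim 256; the exact block sizes that fit are an assumption and may need per-GPU tuning:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

flex_attention = torch.compile(flex_attention)

x = torch.rand(1, 1, 32, 256, device="cuda")
# Smaller BLOCK_M/BLOCK_N tiles trade some throughput for lower shared-memory use.
out = flex_attention(x, x, x, kernel_options={"BLOCK_M": 32, "BLOCK_N": 32})
```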
### Versions
PyTorch version: 2.6.0.dev20241225+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X3D 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 5758.5928
CPU min MHz: 3000.0000
BogoMIPS: 8384.48
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 128 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241225+cu124
[pip3] torch-model-archiver==0.12.0
[pip3] torchmetrics==1.6.0
[pip3] torchserve==0.12.0
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,758,781,540
|
After pth is converted into ptl, the prediction result is very different from pth
|
lizhiwen19900709
|
open
|
[
"oncall: jit"
] | 1
|
NONE
|
### 🐛 Describe the bug
# Load the original config and model
checkpoint = torch.load(checkpoint_path, map_location='cuda')
args = checkpoint['args']
args.num_classes = 250
# Build the model
model, _, _ = build_model(args)
model.load_state_dict(checkpoint['model'])
model.eval()
wrapped_model = DETRWrapper(model)
original_model = wrapped_model  # keep a reference to the original model
# Prepare an example input
example_input = torch.randn(1, 3, 448, 448)
# Test the forward pass
with torch.no_grad():
    try:
        semantic_map = wrapped_model(example_input)
        print("Model test forward pass successful")
        print(f"Output shape: semantic_map {semantic_map.shape}")
    except Exception as e:
        print(f"Error during test forward pass: {e}")
        return
try:
    # Trace the model with TorchScript
    traced_model = torch.jit.trace(wrapped_model, example_input)
    # Make sure the model is on CPU
    traced_model = traced_model.cpu()
    # Add a detailed validation step
    def validate_outputs(pth_model, ptl_model, test_input):
        with torch.no_grad():
            pth_output = pth_model(test_input)
            ptl_output = ptl_model(test_input)
            # Make sure the output dtypes match
            if pth_output.dtype != ptl_output.dtype:
                print(f"Warning: Output dtype mismatch - PTH: {pth_output.dtype}, PTL: {ptl_output.dtype}")
            # Compare the predictions
            match_percentage = (pth_output == ptl_output).float().mean() * 100
            print(f"Prediction match percentage: {match_percentage:.2f}%")
            # Check the class distributions
            pth_classes = torch.unique(pth_output, sorted=True)
            ptl_classes = torch.unique(ptl_output, sorted=True)
            print(f"PTH unique classes: {pth_classes}")
            print(f"PTL unique classes: {ptl_classes}")
            return match_percentage > 95  # require more than 95% of predictions to match
    # Validate before saving the model
    if not validate_outputs(wrapped_model, traced_model, example_input):
        print("Warning: Model conversion validation failed!")
        return
    # Save the model
    traced_model.save(output_path)
### Versions
2.1.0+cu118
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
2,758,751,093
|
[Doc] Add `weight` and `bias` attributes to RMSNorm and GroupNorm
|
gau-nernst
|
closed
|
[
"triaged",
"open source",
"Stale"
] | 3
|
NONE
|
I noticed the RMSNorm doc doesn't mention the `weight` and `bias` attributes like the LayerNorm doc does, so I add them here. While doing that, I saw GroupNorm also didn't document these attributes, so I added them too.
New rendered text
Class | Doc
------|------
RMSNorm | <img width="656" alt="image" src="https://github.com/user-attachments/assets/3937323d-9137-4067-b283-320a50a653ba" />
GroupNorm | <img width="759" alt="image" src="https://github.com/user-attachments/assets/6213aad2-e928-446e-96bd-e799bc83f7ad" />
| true
|
2,758,717,385
|
[DCP]Distributed checkpoint `set_optimizer_state_dict` cause optimizer step error when optimizer contains empty param group
|
FindDefinition
|
closed
|
[
"oncall: distributed",
"module: optimizer",
"triaged"
] | 9
|
NONE
|
### 🐛 Describe the bug
DCP `set_optimizer_state_dict` produces a malformed param group and causes `optim.step` to raise an error when the original state dict contains a param group that doesn't have any parameters.
* Error Message
```
[rank1]: Traceback (most recent call last):
[rank1]: File "/path/to/pytorch_bug/dcp_bug.py", line 45, in <module>
[rank1]: optim_new.step()
[rank1]: File "/opt/miniconda/envs/torchtitan/lib/python3.11/site-packages/torch/optim/optimizer.py", line 493, in wrapper
[rank1]: out = func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/miniconda/envs/torchtitan/lib/python3.11/site-packages/torch/optim/optimizer.py", line 91, in _use_grad
[rank1]: ret = func(self, *args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/miniconda/envs/torchtitan/lib/python3.11/site-packages/torch/optim/adamw.py", line 230, in step
[rank1]: beta1, beta2 = cast(Tuple[float, float], group["betas"])
[rank1]: ~~~~~^^^^^^^^^
[rank1]: KeyError: 'betas'
```
* Code
`torchrun --nnodes=1 --nproc-per-node=4 --standalone /path/to/pytorch_bug/dcp_bug.py`
```Python
import torch
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import get_optimizer_state_dict, set_optimizer_state_dict
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
parallelize_module,
ColwiseParallel,
)
from torch.distributed.tensor import Shard, DTensor, Replicate
import os
_world_size = int(os.environ["WORLD_SIZE"])
device_mesh = init_device_mesh(device_type="cuda", mesh_shape=(_world_size,))
class TestMod(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(64, 64)

    def forward(self, x):
        return self.fc(x)
mod = TestMod().cuda()
parallelize_module(mod, device_mesh, {
"fc": ColwiseParallel(use_local_output=False)
})
optim = torch.optim.AdamW([
{"params": mod.parameters()},
{"params": [], "lr": 0.2}, # empty pg group here
], lr=0.1)
optim_new = torch.optim.AdamW([
{"params": mod.parameters()},
{"params": [], "lr": 0.2}, # empty pg group here
], lr=0.1)
# init optimizer state
sample_inp = torch.randn(2, 128, 64).cuda()
sample_target = torch.randn(2, 128, 64).cuda()
loss_cls = torch.nn.MSELoss()
optim.zero_grad()
output = mod(sample_inp).redistribute(device_mesh, [Replicate()]).to_local()
loss = loss_cls(output, sample_target)
loss.backward()
optim.step()
# bug
optim_state_dict = get_optimizer_state_dict(mod, optim)
set_optimizer_state_dict(mod, optim_new, optim_state_dict)
optim_new.step()
```
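A possible stopgap until this is fixed (a minimal sketch, assuming the empty group only lost its default hyperparameters) is to re-fill the missing keys from the optimizer defaults after restoring the state dict:
```python
# Re-populate hyperparameters that the restore dropped from param groups that
# own no parameters, so AdamW.step() can read e.g. group["betas"] again.
for group in optim_new.param_groups:
    for key, value in optim_new.defaults.items():
        group.setdefault(key, value)
optim_new.step()
```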
### Versions
```
PyTorch version: 2.6.0.dev20241222+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.210-4-velinux1-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
Nvidia driver version: 535.86.10
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241222+cu124
[pip3] torchaudio==2.6.0.dev20241222+cu124
[pip3] torchdata==0.9.0
[pip3] torchpippy==0.2.0+1bcb2bf
[pip3] torchtitan==0.0.2
[pip3] torchvision==0.22.0.dev20241222+cu124
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.4.127 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.5.147 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.11 py311h5eee18b_0
[conda] mkl_random 1.2.8 py311ha02d727_0
[conda] numpy 2.1.3 py311h08b1b3b_0
[conda] numpy-base 2.1.3 py311hf175353_0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241222+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241222+cu124 pypi_0 pypi
[conda] torchdata 0.9.0 pypi_0 pypi
[conda] torchpippy 0.2.0+1bcb2bf pypi_0 pypi
[conda] torchtitan 0.0.2 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241222+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar @LucasLLC @pradeepfn
| true
|
2,758,695,846
|
XPU PyTorch 2.6 WARNING: hwloc library not found in /tcm/latest/lib
|
ekaakurniawan
|
closed
|
[
"triaged",
"module: xpu"
] | 3
|
NONE
|
### 🐛 Describe the bug
When setting up the UMF environment variables, I get the following warning. It happens because ONEAPI_ROOT is never set.
```
$ source /opt/intel/oneapi/umf/0.9/env/vars.sh
WARNING: hwloc library not found in /tcm/latest/lib
```
I need to run the oneAPI setvars script to clear the warning. Please help verify.
```
source /opt/intel/oneapi/setvars.sh
```
The steps I follow are from this link:
https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpu/2-6.html
__Set Up Intel Deep Learning Environment Variables__
```
source /opt/intel/oneapi/compiler/2025.0/env/vars.sh
source /opt/intel/oneapi/umf/0.9/env/vars.sh
source /opt/intel/oneapi/pti/0.10/env/vars.sh
```
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 9 285K
CPU family: 6
Model: 198
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 24
Stepping: 2
CPU(s) scaling MHz: 27%
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 7372.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni lam wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (20 instances)
L1i cache: 1.3 MiB (20 instances)
L2 cache: 40 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] pytorch-triton-xpu==3.2.0
[pip3] torch==2.6.0+xpu
[pip3] torchaudio==2.6.0+xpu
[pip3] torchvision==0.21.0+xpu
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,758,685,432
|
[don't merge] build cpu via vs2022 (test diff)
|
xuhancn
|
closed
|
[
"open source",
"ciflow/binaries",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
| true
|
2,758,684,671
|
Tensor.item() blocks cudaLaunchKernel on other threads.
|
li-yi-dong
|
closed
|
[] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Tensor.item() uses `cudaMemcpyAsync`, which triggers a Memcpy DtoH (Device -> Pageable). It seems that this kind of `cudaMemcpyAsync` blocks any other `cudaLaunchKernel`, even on other threads.

I'm trying to overlap the model forward pass with data preparation. This behavior hurts performance and can even cause a deadlock.
Why must the `.item()` method use pageable host memory? Is there any way to work around it?
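One possible workaround (a minimal sketch, assuming the scalar only needs to be read occasionally) is to stage the value through a pinned host buffer with a non-blocking copy, which issues a Device -> Pinned transfer instead of the pageable copy that `.item()` performs:
```python
import torch

gpu_scalar = torch.randn((), device="cuda")

# Pinned (page-locked) host buffer: DtoH copies into it can run asynchronously.
host_buf = torch.empty((), pin_memory=True)

copy_stream = torch.cuda.Stream()
with torch.cuda.stream(copy_stream):
    host_buf.copy_(gpu_scalar, non_blocking=True)

# Other threads/streams can keep launching kernels; only synchronize the copy
# stream when the Python value is actually needed.
copy_stream.synchronize()
value = host_buf.item()
```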
### Versions
Collecting environment information...
PyTorch version: 2.1.2
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.10.135.bsk.6-amd64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) Platinum 8457C
BIOS Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Versions of relevant libraries:
[pip3] cudnn==1.1.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] optree==0.11.0
[pip3] pynvjitlink==0.1.13
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==2.1.0+e6216047b8
[pip3] torch==2.1.2
[pip3] torchaudio==2.1.2+cu121
[pip3] torchdata==0.7.1a0
[pip3] torchtext==0.17.0a0
[pip3] torchtyping==0.1.5
[pip3] torchvision==0.16.2+cu121
[pip3] triton==2.1.0
[pip3] tritonclient==2.50.0
[conda] Could not collect
| true
|
2,758,643,409
|
[inductor][cpu] AMP/FP32 single thread performance regression in 2024-12-23 nightly release
|
zxd1997066
|
open
|
[
"needs reproduction",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 6
|
CONTRIBUTOR
|
### 🐛 Describe the bug
<p>AMP static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>pyhpc_isoneutral_mixing</td>
<td>single</td>
<td>1</td>
<td>23.99195</td>
<td>0.0001438</td>
<td>0.00345004241</td>
<td>46.329881</td>
<td>1</td>
<td>28.984441</td>
<td>0.00012082</td>
<td>0.00350190016162</td>
<td>45.62457</td>
<td>0.83</td>
<td>1.02</td>
<td>0.84</td>
<td>0.98</td>
</tr>
<tr>
<td>torchbench</td>
<td>lennard_jones</td>
<td>single</td>
<td>1</td>
<td>3.080003</td>
<td>8.8519e-05</td>
<td>0.000272638785557</td>
<td>38.040263</td>
<td>1</td>
<td>3.433204</td>
<td>7.742e-05</td>
<td>0.00026579865367999997</td>
<td>38.885432</td>
<td>0.9</td>
<td>0.97</td>
<td>0.87</td>
<td>1.02</td>
</tr>
</tbody>
</table>
<p>AMP dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>pyhpc_isoneutral_mixing</td>
<td>single</td>
<td>1</td>
<td>30.10916</td>
<td>5.8161999999999996e-05</td>
<td>0.0017512089639199998</td>
<td>12.582083</td>
<td>1</td>
<td>38.434939</td>
<td>4.9305e-05</td>
<td>0.001895034667395</td>
<td>12.577097</td>
<td>0.78</td>
<td>1.08</td>
<td>0.85</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>AMP static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>pyhpc_isoneutral_mixing</td>
<td>single</td>
<td>1</td>
<td>29.729494</td>
<td>5.8566e-05</td>
<td>0.001741137545604</td>
<td>12.629471</td>
<td>1</td>
<td>34.628374</td>
<td>5.1991000000000004e-05</td>
<td>0.0018003637926340002</td>
<td>12.62295</td>
<td>0.86</td>
<td>1.03</td>
<td>0.89</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>AMP dynamic shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>pyhpc_isoneutral_mixing</td>
<td>single</td>
<td>1</td>
<td>29.646898</td>
<td>0.000106697</td>
<td>0.003163235075906</td>
<td>30.752565</td>
<td>1</td>
<td>37.355747</td>
<td>8.9253e-05</td>
<td>0.003334112486991</td>
<td>30.668563</td>
<td>0.79</td>
<td>1.05</td>
<td>0.84</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p>FP32 static shape CPP wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>lennard_jones</td>
<td>single</td>
<td>1</td>
<td>1.379054</td>
<td>5.8948e-05</td>
<td>8.1292475192e-05</td>
<td>7.819986</td>
<td>1</td>
<td>1.597033</td>
<td>5.0586e-05</td>
<td>8.0787511338e-05</td>
<td>7.780064</td>
<td>0.86</td>
<td>0.99</td>
<td>0.86</td>
<td>0.99</td>
</tr>
<tr>
<td>torchbench</td>
<td>pyhpc_equation_of_state</td>
<td>single</td>
<td>1</td>
<td>19.078431</td>
<td>6.2443e-05</td>
<td>0.0011913144669329998</td>
<td>11.053214</td>
<td>1</td>
<td>22.690209</td>
<td>5.2470000000000004e-05</td>
<td>0.0011905552662300001</td>
<td>10.964946</td>
<td>0.84</td>
<td>1.0</td>
<td>0.84</td>
<td>0.99</td>
</tr>
<tr>
<td>torchbench</td>
<td>pyhpc_isoneutral_mixing</td>
<td>single</td>
<td>1</td>
<td>40.936272</td>
<td>7.7847e-05</td>
<td>0.0031867659663840004</td>
<td>13.349267</td>
<td>1</td>
<td>47.610703</td>
<td>6.633e-05</td>
<td>0.00315801792999</td>
<td>13.237678</td>
<td>0.86</td>
<td>0.99</td>
<td>0.85</td>
<td>0.99</td>
</tr>
</tbody>
</table>
the last good commit: c04f0bb7b9537758e1e5c956ebcb20e153ef9544
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench pyhpc_isoneutral_mixing amp
Testing with inductor.
single-thread testing....
loading model: 0it [00:00, ?it/s]
cpu eval pyhpc_isoneutral_mixing
running benchmark: 100%|█████████████████████████████████████████████████████| 50/50 [00:00<00:00, 484.21it/s]
35.332x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,pyhpc_isoneutral_mixing,1,35.332336,0.049990,11.903519,0.795666,40.422605,50.803507,746,1,0,0,0,0,1
```
the bad commit: 18261e9f39580989b5902b6b70f6a8371372c5c8
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench pyhpc_isoneutral_mixing amp
Testing with inductor.
single-thread testing....
loading model: 0it [00:00, ?it/s]
cpu eval pyhpc_isoneutral_mixing
running benchmark: 100%|█████████████████████████████████████████████████████| 50/50 [00:00<00:00, 492.06it/s]
30.152x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,pyhpc_isoneutral_mixing,1,30.152358,0.057168,7.816028,0.808176,40.422605,50.017075,746,1,0,0,0,0,1
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>766a5e3a</td>
<td>main</td>
<td>766a5e3a</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>f1cbf4b1b5a299f999c11e77bfabe39c7f04efdc</td>
<td>main</td>
<td>dd2d360b7d5dcc66660fdfe8da083a7077dada56</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.6.0a0+b6d4675</td>
<td>main</td>
<td>2.5.0a0+265bc5c</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh single inference performance torchbench pyhpc_isoneutral_mixing amp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/18261e9f39580989b5902b6b70f6a8371372c5c8
[torchbench-pyhpc_isoneutral_mixing-inference-amp-static-default-single-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/18244801/torchbench-pyhpc_isoneutral_mixing-inference-amp-static-default-single-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @chuanqi129
| true
|
2,758,605,243
|
Possible race condition found in TailLogTest.test_tail
|
cdzhan
|
open
|
[
"oncall: distributed",
"module: tests",
"module: elastic"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
### Error message
```bash
Time: 12/20/2024 10:05:37, Level: 40000, Log: Traceback (most recent call last):
  File "/opt/py3.10/lib/python3.10/unittest/case.py", line 59, in testPartExecutor
    yield
  File "/opt/py3.10/lib/python3.10/unittest/case.py", line 591, in run
    self._callTestMethod(testMethod)
  File "/opt/py3.10/lib/python3.10/unittest/case.py", line 549, in _callTestMethod
    method()
  File "/torch/src/pytorch/test/distributed/elastic/multiprocessing/tail_log_test.py", line 83, in test_tail
    self.assertEqual(
  File "/opt/py3.10/lib/python3.10/unittest/case.py", line 845, in assertEqual
    assertion_func(first, second, msg=msg)
  File "/opt/py3.10/lib/python3.10/unittest/case.py", line 1144, in assertDictEqual
    self.fail(self._formatMessage(msg, standardMsg))
  File "/opt/py3.10/lib/python3.10/unittest/case.py", line 675, in fail
    raise self.failureException(msg)
AssertionError: {'[writer0]': {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1[156927 chars]999}} != {'[writer1]': {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1[156922 chars]999}}
Diff is 722560 characters long. Set self.maxDiff to None to see it.
```
### Possible root cause
Could some worker threads of `TailLog` be opening their log files only after they have already received the stop signal? It's difficult to reproduce, and I have only encountered it once.
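If that is what happens, the interleaving would look roughly like the sketch below. This is a minimal reconstruction of the suspected race, not the actual `TailLog` code; the function and variable names (`tail_logfile_sketch`, `stop_event`, `dst`) are made up for illustration.
```python
import os
import threading
import time


def tail_logfile_sketch(path: str, dst: list, stop_event: threading.Event, interval: float = 0.01) -> None:
    # Hypothetical tailer thread: wait for the log file to appear, then copy its lines to dst.
    while not os.path.exists(path):
        if stop_event.is_set():
            # If the stop signal arrives before the writer has created the file,
            # the thread returns without ever reading it, and every line from
            # that writer is silently dropped -- which would explain the missing
            # '[writerN]' key in the assertion above.
            return
        time.sleep(interval)
    with open(path) as f:
        while True:
            line = f.readline()
            if line:
                dst.append(line.strip())
            elif stop_event.is_set():
                return
            else:
                time.sleep(interval)
```
One way to probe this hypothesis would be to artificially delay the writers (or the tailer threads) in the test and check whether the dict-key mismatch becomes reproducible.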
### Versions
main
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @mruberry @ZainRizvi @dzhulgakov
| true
|
2,758,557,847
|
[Inductor][CPP][CPU] Fix floating point exception error during division/mod
|
maybeLee
|
closed
|
[
"triaged",
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 4
|
CONTRIBUTOR
|
Fixes #143649
This PR fixes the floating point exception (FPE) in four operators: `torch.floor_divide`, `torch.remainder`, `torch.fmod`, and `torch.divide`.
Before this PR, when both `a` and `b` are integer tensors and `b=0`:
| API | Eager Mode | torch.compile mode |
| -------- | ------- | ------- |
| torch.floor_divide(a,b) | RuntimeError("ZeroDivisionError") | FPE (core dumped) |
| torch.remainder(a,b) | RuntimeError("ZeroDivisionError") | FPE (core dumped) |
| torch.fmod(a,b) | RuntimeError("ZeroDivisionError") | FPE (core dumped) |
| torch.divide(a,b, rounding_mode='trunc') | RuntimeError("ZeroDivisionError") | FPE (core dumped) |
After this PR, when both `a` and `b` are integer tensors and `b=0` (a minimal repro sketch follows the table):
| API | Eager Mode | torch.compile mode |
| -------- | ------- | ------- |
| torch.floor_divide(a,b) | RuntimeError("ZeroDivisionError") | RuntimeError("ZeroDivisionError") |
| torch.remainder(a,b) | RuntimeError("ZeroDivisionError") | RuntimeError("ZeroDivisionError") |
| torch.fmod(a,b) | RuntimeError("ZeroDivisionError") | RuntimeError("ZeroDivisionError") |
| torch.divide(a,b, rounding_mode='trunc') | RuntimeError("ZeroDivisionError") | RuntimeError("ZeroDivisionError") |
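A minimal repro sketch matching the rows above (written against the public `torch.compile` API; the exact error text may vary between builds):
```python
import torch


def floor_div(a, b):
    return torch.floor_divide(a, b)


a = torch.tensor([4], dtype=torch.int64)
b = torch.tensor([0], dtype=torch.int64)

# Eager mode raises a RuntimeError ("ZeroDivisionError") for integer division by zero.
try:
    floor_div(a, b)
except RuntimeError as e:
    print("eager:", e)

# Before this PR, the compiled CPP/CPU kernel crashed the process with a
# floating point exception (core dumped); after this PR, it raises the same
# RuntimeError as eager mode.
compiled = torch.compile(floor_div)
try:
    compiled(a, b)
except RuntimeError as e:
    print("compiled:", e)
```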
### Test Plan
```
pytest -s -v test/inductor/test_cpu_select_algorithm.py -k test_cpu_integer_div_by_zero
```
Additionally, I wrote a toy script to check that the CPU inference efficiency of these four operators is not noticeably affected, so I assume the overhead of adding these checks is acceptable.
| API | Before this PR | After this PR |
| -------- | ------- | ------- |
| torch.floor_divide(a,b) | 3.168026606241862e-05 | 3.1781593958536786e-05 |
| torch.fmod(a,b) | 3.8297573725382486e-05 | 3.777662913004557e-05 |
| torch.remainder(a,b) | 4.244565963745117e-05 | 4.1649738947550455e-05 |
| torch.divide(a,b, rounding_mode='trunc') | 4.503051439921061e-05 | 4.452188809712728e-05 |
<details>
<summary>Detailed Code For Measuring The Efficiency</summary>

```python
import time
import torch
import numpy as np

np.random.seed(2024)
op_list = [torch.floor_divide, torch.fmod, torch.remainder, torch.divide]
cop_list = [torch.compile(f) for f in op_list]
dtype_list = [torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]

# cold start: trigger compilation for every (op, dtype) combination
for cop in cop_list:
    for dtype in dtype_list:
        value = torch.tensor(np.random.randn(1, 2, 3), dtype=dtype)
        value[value == 0] = 1
        divisor = torch.tensor(np.random.randn(1, 2, 3), dtype=dtype)
        divisor[divisor == 0] = 1
        try:
            res = cop(value, divisor)
        except RuntimeError:
            pass

for op, cop in zip(op_list, cop_list):
    print(f"Benchmarking {op.__name__}")
    inference_time_list = []
    for dtype in dtype_list:
        for i in range(100):
            value = torch.tensor(np.random.randn(1, 2, 3), dtype=dtype)
            value[value == 0] = 1
            divisor = torch.tensor(np.random.randn(1, 2, 3), dtype=dtype)
            divisor[divisor == 0] = 1
            start = time.time()
            try:
                res = cop(value, divisor)
            except RuntimeError:
                pass
            inference_time_list.append(time.time() - start)
    print(f"Average inference time: {np.mean(inference_time_list)}")
    print(f"Max inference time: {np.max(inference_time_list)}")
    print(f"Min inference time: {np.min(inference_time_list)}")
```
</details>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|