| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,766,846,545
|
cpp_wrapper AOTI: Precompile device-specific header files
|
benjaminglass1
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144124
* #144123
* #144002
* #143909
* #143421
* #143223
* #141371
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,846,515
|
cpp_wrapper AOTI: Move #includes to per-device header files
|
benjaminglass1
|
closed
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144124
* __->__ #144123
* #144002
* #143909
* #143421
* #143223
* #141371
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,841,755
|
[MPSInductor][EZ] Fix logical_[or|and] ops
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144084
* #144083
* #144050
* #144105
* __->__ #144122
* #144051
* #144055
For boolean operands it does not really matter whether `&` or `&&` is
used, but if one were ever to rely on operator precedence, bitwise ops
have higher precedence than logical ones.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,835,218
|
[mps/inductor] Add support for atanh().
|
dcci
|
closed
|
[
"Merged",
"ciflow/trunk",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
MEMBER
|
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,829,107
|
[Submodule] Turning flash-attention integration into 3rd party submod
|
drisspg
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks",
"ciflow/inductor",
"suppress-bc-linter",
"ciflow/rocm",
"ci-no-td",
"module: sdpa"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144120
# Summary
### Sticky points
CUDA-graph RNG handling has changed / deviated from the original implementation. We will be left with a dangling 'offset' value and confusing naming due to BC.
## Dependencies
- Flash PR: https://github.com/Dao-AILab/flash-attention/pull/1419
### Other Points
- The BC linter is complaining about losing generate.py and its functions, which is not a real BC surface
cc @albanD
Differential Revision: [D68502879](https://our.internmc.facebook.com/intern/diff/D68502879)
| true
|
2,766,829,071
|
working
|
drisspg
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144120
* __->__ #144119
| true
|
2,766,820,801
|
Migrate the rest of CUDA 12.1 jobs to 12.4
|
huydhn
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/periodic",
"ciflow/inductor-periodic"
] | 4
|
CONTRIBUTOR
|
CUDA 12.4 is the default now and we don't build nightly 12.1 anymore, so it's time to move the rest of the CI jobs to 12.4. I also cleaned up some redundant CI jobs on periodic and inductor-periodic.
| true
|
2,766,819,823
|
Multihead Attention with mask producing float32 spontaneously, somehow compile cache related
|
IlanCosman
|
closed
|
[
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
Here is a minimal reproducer. It compiles but produces a warning about float32 usage.
```
UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
```
```python
import torch
import torch._inductor.config
from torch import Tensor, nn
torch.set_default_dtype(torch.bfloat16)
# Set this to True to produce the bug just the first time you run the script
# Set this to False to always reproduce the bug
torch._inductor.config.fx_graph_cache = True
batch = 2
seq = 3
features = 4
data = torch.randn(batch, seq, features).cuda()
mask = torch.randn(batch, seq).cuda() < 0
mask_per_seq = mask.unsqueeze(1).expand(-1, seq, -1) # (batch, seq, seq)
@torch.compile
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.mha = nn.MultiheadAttention(
embed_dim=features, num_heads=1, batch_first=True
)
def forward(self, x: Tensor, attn_mask: Tensor) -> Tensor:
return self.mha(x, x, x, attn_mask=attn_mask, need_weights=False)[0]
model = MyModule().cuda()
print(model(data, mask_per_seq))
```
This warning shouldn't appear since everything ought to be in bfloat16.
Passing an `attn_mask` is what seems to cause the problem. `need_weights` and `batch_first` etc. don't matter.
Finally, there is some sort of caching component, where the first run will produce the warning, and subsequent runs will not. If one turns off the cache, either from the command line or in the code, the warning will appear every time.
### Versions
Arch Linux
Python 3.12.7
Pytorch 2.5.1+cu124 (also reproduced on nightly 2.6.0.dev20241231+cu126)
cc @chauhang @penguinwu
| true
|
2,766,819,123
|
RNN batch_first argument only works on the input, not h_0, when it should work on both
|
jsyoo61
|
open
|
[
"module: nn",
"module: rnn",
"triaged"
] | 4
|
NONE
|
### 🚀 The feature, motivation and pitch
Hi,
the RNN `batch_first` argument only works on the input, not h_0, when it should work on both. This applies to all 3 RNN implementations ([RNN](https://pytorch.org/docs/stable/generated/torch.nn.RNN.html), [GRU](https://pytorch.org/docs/stable/generated/torch.nn.GRU.html#torch.nn.GRU), [LSTM](https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html#torch.nn.LSTM)).
[Currently]
input shape:
(L, H_in) for unbatched input
(L, N, H_in) when `batch_first=False` for batched input
(N, L, H_in) when `batch_first=True` for batched input
h_0 shape:
(D*num_layers, H_out) for unbatched input
(D*num_layers, N, H_out) regardless of `batch_first` argument
[Desired]
input shape:
(L, H_in) for unbatched input
(L, N, H_in) when `batch_first=False` for batched input
(N, L, H_in) when `batch_first=True` for batched input
h_0 shape:
(D*num_layers, H_out) for unbatched input
(D*num_layers, N, H_out) when `batch_first=False` for batched input
(N, D*num_layers, H_out) when `batch_first=True` for batched input
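For context, a minimal sketch of the current behavior (sizes are hypothetical): even with `batch_first=True`, `h_0`/`c_0` must still be passed in the `(D*num_layers, N, H_out)` layout.
```python
import torch
from torch import nn

# Hypothetical sizes for illustration only.
N, L, H_in, H_out, num_layers = 2, 5, 3, 4, 1
rnn = nn.LSTM(H_in, H_out, num_layers, batch_first=True)

x = torch.randn(N, L, H_in)              # input is batch-first
h0 = torch.randn(num_layers, N, H_out)   # hidden state is NOT batch-first
c0 = torch.randn(num_layers, N, H_out)

out, (hn, cn) = rnn(x, (h0, c0))
print(out.shape)  # torch.Size([2, 5, 4]) -- respects batch_first
print(hn.shape)   # torch.Size([1, 2, 4]) -- does not
```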
Thank you.
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,766,810,271
|
[compiled autograd] support Tensor Subclasses in AOTBackward
|
zou3519
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: composability",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"module: compiled autograd",
"ci-no-td"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144115
* #143417
* #143405
* #143387
* #143304
* #143296
Compiled autograd's initial trace traces through the AOTBackward
epilogue. The Tensor Subclass code is not traceable. This PR changes it
so that when we see Tensor Subclass constructors, we proxy nodes for
their construction into the graph.
Test Plan:
- New basic test with TwoTensor
- Existing tests
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @xmfan
| true
|
2,766,810,238
|
[ca] add test_dtensor_compile.py to compiled autograd tests
|
zou3519
|
closed
|
[
"oncall: distributed",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144115
* __->__ #144114
* #143417
* #143405
* #143387
* #143304
* #143296
This is just #144107; I put it here because ghstack with multiple users
is weird.
| true
|
2,766,800,403
|
[cpu/sorting] Throw an error when trying to sort complex numbers.
|
dcci
|
closed
|
[
"module: sorting and selection",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend"
] | 5
|
MEMBER
|
It doesn't really make sense to sort complex numbers as they are not comparable.
Fixes #129296
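A minimal usage sketch of the new behavior (the exact error type and message are illustrative, not taken from the PR):
```python
import torch

z = torch.tensor([1 + 2j, 3 + 0j])
try:
    torch.sort(z)  # complex values have no total order, so this is rejected
except Exception as e:  # exact exception type/message may differ
    print("sort on complex rejected:", e)
```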
| true
|
2,766,798,835
|
Use the build environment as sccache prefix instead of workflow name
|
huydhn
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
This is an attempt to improve cache usage for jobs in non-pull workflows like periodic, slow, or inductor, as we are seeing build timeouts there from time to time, for example https://github.com/pytorch/pytorch/actions/runs/12553928804. Build timeouts never happen in pull or trunk AFAICT because those workflows are more up to date with the cache content coming from the PR itself.
Logically, the same build should use the same cache regardless of the workflow. We have many examples where the same build, for example [linux-focal-cuda12.4-py3.10-gcc9-sm86](https://github.com/search?q=repo%3Apytorch%2Fpytorch+linux-focal-cuda12.4-py3.10-gcc9-sm86&type=code), is split between different workflows and thus uses different caches.
I could gather some sccache stats from CH in the meantime to try to prove the improvement before and after this lands.
| true
|
2,766,796,036
|
Use c10 version of half/bfloat16 in executorch
|
swolchok
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: build",
"topic: not user facing"
] | 9
|
CONTRIBUTOR
|
Summary:
X-link: https://github.com/pytorch/executorch/pull/7040
Accomplished by importing relevant files from c10 into
executorch/runtime/core/portable_type/c10, and then using `using` in
the top-level ExecuTorch headers. This approach should keep the
ExecuTorch build hermetic for embedded use cases. In the future, we
should add a CI job to ensure the c10 files stay identical to the
PyTorch ones.
ghstack-source-id: 260047850
exported-using-ghexport
Test Plan: builds
Differential Revision: D66106969
| true
|
2,766,780,923
|
[typing] Add type hints to `@property` and `@lazy_property` in `torch.distributions`.
|
randolf-scholz
|
closed
|
[
"module: distributions",
"module: typing",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"suppress-bc-linter"
] | 8
|
CONTRIBUTOR
|
Fixes #76772, #144196
Extends #144106
- added type annotations to `lazy_property`.
- added type annotations to all `@property` and `@lazy_property` members inside the `torch.distributions` module.
- added a simple type-check unit test to ensure type inference is working.
- replaced deprecated annotations like `typing.List` with their modern counterparts.
- simplified `torch.Tensor` hints to plain `Tensor`, otherwise signatures can become very verbose.
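For illustration, a hedged sketch (not the actual `torch.distributions` source) of the annotation style these changes add:
```python
from torch import Tensor
from torch.distributions.utils import lazy_property

class ExampleDistribution:
    """Illustrative only; mirrors the annotation style, not a real distribution."""

    def __init__(self, loc: Tensor, scale: Tensor) -> None:
        self.loc = loc
        self.scale = scale

    @property
    def mean(self) -> Tensor:
        return self.loc

    @lazy_property
    def variance(self) -> Tensor:
        return self.scale.square()
```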
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @malfet @xuzhao9 @gramster
| true
|
2,766,774,566
|
Uneven Sharding in DTensor Leads to unexpected tensor resolution with `full_tensor`
|
coreyjadams
|
open
|
[
"oncall: distributed",
"triaged",
"actionable",
"module: dtensor"
] | 4
|
NONE
|
### 🐛 Describe the bug
Appears related to at least #143372. tl;dr: DTensor `full_tensor` operations are incorrect if sharding is not even AND the sharding doesn't implicitly match the uneven sharding that DTensor expects.
I've recently hit this issue - uneven sharding of `DTensor` leads to unexpected results. As a toy example, consider a 1D device mesh of size 4 with the following shapes:
- [2,2] on device 0
- [2,3] on device 1
- [2,4] on device 2
- [2,5] on device 3
Each tensor is created with torch.arange(2*(2 + rank)) + 0.1*rank (to give uneven sizes and to make the decimal place match the rank).
This would have a global shape of (2, 14) when sharding with `placements=Shard(1)`, and strides of (14, 1). It's possible to pass the shape and stride to `DTensor.from_local` without any complaints from pytorch (even with `run_check=True`). Visualizing the tensor gives
```
Col 0-3 Col 4-7 Col 8-11 Col 12-13
------- --------- --------- ---------- -----------
Row 0-1 cuda:0 cuda:1 cuda:2 cuda:3
```
Whereas the correct placement (without any data movement) would be
```
Col 0-1 Col 2-5 Col 6-9 Col 10-14
------- --------- ---------- ----------- -----------
Row 0-1 cuda:0 cuda:1 cuda:2 cuda:3
```
When I call `dtensor.full_tensor()`, it comes together with incorrect results:
```
Rank 0 has global tensor: tensor([[0.0000, 1.0000, 0.0000, 0.0000, 0.1000, 1.1000, 2.1000, 0.0000, 0.2000,
1.2000, 2.2000, 3.2000, 0.3000, 1.3000],
[2.0000, 3.0000, 0.0000, 0.0000, 3.1000, 4.1000, 5.1000, 0.0000, 4.2000,
5.2000, 6.2000, 7.2000, 5.3000, 6.3000]], device='cuda:0') of shape torch.Size([2, 14])
```
My best estimate is that this uses the size that DTensor _expects_ for sharding, based on dividing the sharded axis (14) by the number of shards (4): sizes (on axis 1) come out to 4, 4, 4, 2. It looks like the operation creates zero tensors, writes the incoming buffers into the expected locations from the expected start points, and truncates any extra input.
Instead, I would expect (if the global size is correct) the output to be correctly concatenated across dimensions. Similar to this operation (which works below):
```
# Correct implementation:
size_list = [0,]*domain_size
# Gather the sizes:
dist.all_gather_object(size_list, local_chunk.shape)
# Create buffers:
output_tensor = [torch.empty(s, device=local_chunk.device) for s in size_list]
# Gather up:
dist.all_gather(output_tensor, local_chunk, group = domain_group)
#Concat:
output_tensor = torch.cat(output_tensor, dim=1)
if rank == 0:
print(f"Correct output tensor: {output_tensor}")
```
Coming to the point of all of this:
- BUG: The DTensor implementation should raise an error if using `from_local` with `run_check=True`, if the local sharding does not match the sharding DTensor is expecting.
- Feature request: For uneven shardings, `full_tensor` should properly gather and concatenate tensor shards based on actual shapes and not DTensor's implicit shapes.
**Why does this matter / What's the use case?** I'm exploring techniques to use DTensor for domain parallelism on very, very large input data with relatively small models. Sharding can start out even for regular data (images, volumes) or uneven for irregular data (graphs), but some operations - even on regular data - will produce output that does not distribute evenly. Even convolutions, with a reasonable choice of kernel, stride, and padding, will do this on reasonably shaped images.
Full Reproducer is here:
```
import torch
torch.manual_seed(1234)
import torch.distributed as dist
from torch.distributed import tensor as dist_tensor
from torch.distributed.tensor import debug
if __name__ == "__main__":
mesh_shape = [4,]
mesh_dim_names = ["domain"]
mesh = dist.device_mesh.init_device_mesh(
"cuda",
mesh_shape,
mesh_dim_names=mesh_dim_names,
)
# Access the rank easily through the manager:
rank = dist.get_rank()
# Make the chunks uneven and with different but predictable values:
local_chunk = torch.arange(2*(2 + rank), dtype=torch.float32) + 0.1*rank
# To make this example not completely trivial, we have 2D tensors split along one axis
local_chunk = local_chunk.reshape((2,-1)).cuda().contiguous()
local_chunk = local_chunk.to(f"cuda:{rank}")
# Create the mesh, per usual:
domain_mesh = mesh["domain"]
# First, slice the local input based on domain mesh rank:
domain_group = domain_mesh.get_group()
domain_rank = dist.get_group_rank(domain_group, rank)
domain_size = len(dist.get_process_group_ranks(domain_group))
shape = (2, 14)
stride = (14, 1)
# Now, we can create the dtensor properly:
dtensor = dist_tensor.DTensor.from_local(
local_chunk,
device_mesh = domain_mesh,
placements = (dist_tensor.Shard(1),),
run_check=True,
shape=shape,
stride=stride
)
print(dist_tensor.Shard._local_shard_size_on_dim(26, 4, 0))
print(f"Rank {rank} has {dtensor}\n")
debug.visualize_sharding(dtensor)
full = dtensor.full_tensor()
if rank == 0:
print(f"Rank {rank} has global tensor: {full} of shape {full.shape}\n")
# Correct implementation:
size_list = [0,]*domain_size
# Gather the sizes:
dist.all_gather_object(size_list, local_chunk.shape)
# Create buffers:
output_tensor = [torch.empty(s, device=local_chunk.device) for s in size_list]
# Gather up:
dist.all_gather(output_tensor, local_chunk, group = domain_group)
#Concat:
output_tensor = torch.cat(output_tensor, dim=1)
if rank == 0:
print(f"Correct output tensor: {output_tensor}")
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1032-oracle-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-254
Off-line CPU(s) list: 255
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7J13 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2550.000
CPU max MHz: 3673.0950
CPU min MHz: 1500.0000
BogoMIPS: 4900.16
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-254
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] nvtx==0.2.10
[pip3] onnx==1.17.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.10.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.0.1 py312hc5e2394_1
[conda] numpy-base 2.0.1 py312h0da6c21_1
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchdata 0.10.0 pypi_0 pypi
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,766,740,777
|
[inductor] Add type annotations to _inductor/utils.py
|
rec
|
closed
|
[
"module: typing",
"open source",
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 17
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144108
cc @ezyang @malfet @xuzhao9 @gramster @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,714,007
|
[ca] add test_dtensor_compile.py to compiled autograd tests
|
xmfan
|
closed
|
[
"oncall: distributed",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144107
more than half the tests use autograd, pass rate 19/26
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,704,282
|
added type hints to `lazy_property`
|
randolf-scholz
|
closed
|
[
"module: distributions",
"module: typing",
"open source"
] | 4
|
CONTRIBUTOR
|
Partial fix for #76772; it remains to add type hints to all the properties of the predefined distribution objects.
EDIT: #144110 builds on top of this PR and provides these type hints.
cc @fritzo @neerajprad @alicanb @nikitaved @ezyang @malfet @xuzhao9 @gramster
| true
|
2,766,647,259
|
[MPSInductor] Add signbit op support
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144084
* #144083
* #144050
* #144156
* __->__ #144105
* #144122
* #144051
* #144055
By mapping it to `metal::signbit`
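For reference, a minimal eager-mode illustration of the op being mapped (values are arbitrary):
```python
import torch

x = torch.tensor([-1.0, -0.0, 0.0, 2.5])
print(torch.signbit(x))  # tensor([ True,  True, False, False])
```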
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,629,233
|
MPS returns 0 for `BCEWithLogitsLoss` on empty tensors while CPU and CUDA return nan
|
dylwil3
|
closed
|
[
"triaged",
"actionable",
"module: mps",
"module: empty tensor"
] | 6
|
NONE
|
### 🐛 Describe the bug
```python
import torch
import torch.nn.functional as F
x = torch.tensor([])
y = torch.tensor([])
loss = F.binary_cross_entropy_with_logits
print(loss(x.to("cpu"),y.to("cpu"))) # tensor(nan)
if torch.cuda.is_available():
print(loss(x.to("cuda"),y.to("cuda"))) # tensor(nan, device='cuda:0')
if torch.backends.mps.is_available():
print(loss(x.to("mps"),y.to("mps"))) # tensor(0., device='mps:0')
```
### Versions
The `collect_env` script failed to run (maybe because I don't use pip?), but let me know if you are unable to reproduce and need more information.
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
| true
|
2,766,627,302
|
Update TorchInductor to support removed AttrsDescriptor in upstream Triton
|
jansel
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: inductor"
] | 6
|
CONTRIBUTOR
|
https://github.com/triton-lang/triton/pull/5512 removed `AttrsDescriptor` which TorchInductor generates in its output code.
To support Triton versions after that PR we will need to update the code we generate.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
| true
|
2,766,598,517
|
allow_in_graph footgun: nested user functions
|
zou3519
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/blob/bb5e439f2d8a46172b8b7d2fdb7609822b9a97b1/torch/_dynamo/decorators.py#L138-L153
allow_in_graph recognizes functions by their Python id. A nested user function might get deallocated and the id reused. This may lead to nondeterministic behavior. These dicts should be weakkeydictionaries.
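A minimal sketch of the footgun (the registry here is a hypothetical stand-in for the id-keyed dicts, not the actual decorator internals):
```python
def make_fn():
    def inner(x):
        return x + 1
    return id(inner)  # `inner` is deallocated once make_fn returns

registered = {make_fn()}  # stand-in for an id-keyed allow_in_graph registry

def unrelated(x):
    return x * 2

# This can nondeterministically print True if `unrelated` happens to be
# allocated at the address the dead `inner` used to occupy.
print(id(unrelated) in registered)
```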
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,766,571,397
|
Clarify what we mean by decoupled weight decay in the *AdamWs
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: docs",
"release notes: optim"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144101
| true
|
2,766,556,815
|
[dtensor] expose the __create_chunk_list__ in the doc
|
wanchaol
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144100
* #144099
as titled, this PR exposes the dunder method as a public API in the doc,
so that different checkpoint implementations can leverage this protocol
instead of exposing a separate API
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,766,556,763
|
[dtensor] improve doc of the DTensor class
|
wanchaol
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (dtensor)"
] | 4
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144100
* __->__ #144099
as titled: explicitly list all public members to make sure the public
API stays consistent, and use groupwise member order to make the doc
look better
cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,766,517,937
|
[ROCm][Windows] Fix export macros
|
m-gallus
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 4
|
CONTRIBUTOR
|
For correct import and export of functions when dynamic linkage is used for HIP libraries on Windows, the appropriate export/import macros need to be put in place. This pull request reuses the existing CUDA import/export macros by converting them to the corresponding HIP macros during the hipification process.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,766,509,490
|
partitioner: when materializing unbacked tensor intermediates, apply hint to symbol, not expr
|
bdhirsh
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: composability",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Fixes https://github.com/pytorch/pytorch/issues/144095
open to suggestions: the `hint_int(..., fallback=...)` API feels like a bit of a footgun, because:
(1) we use the same guess for every unbacked symint (both symbols, and compound expressions)
(2) the user may have established some relationship between some unbacked symints that we are not taking into account.
I'm not sure how real of an issue (2) is - is it common to e.g. generate two unbacked symints, and then add a runtime assert that they are unequal?
Instead I did something simpler that's just enough to fix the linked issue: if we have a sympy expression containing an unbacked symbol (e.g. `u0 + 1`), then the partitioner will now fill in the symbol with our guess instead of the expression (plugging in `u0=4096` gets us 4097). This was important for an internal custom op, that had some logic like this:
```
def custom_op(x: [u0], y: [u0 + 1]):
assert x.shape[0] == y.shape[0] - 1
...
```
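To make the substitution concrete, here is a small sympy sketch (sympy backs the symbolic shape expressions; the numbers are just the guess mentioned above):
```python
import sympy

u0 = sympy.Symbol("u0", positive=True, integer=True)
x_len, y_len = u0, u0 + 1

hint = {u0: 4096}
# Substituting the guess into the *symbol* keeps the relationship between
# the two sizes (4096 and 4097) instead of guessing 4096 for both expressions.
print(x_len.subs(hint), y_len.subs(hint))
```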
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144438
* __->__ #144097
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,766,507,672
|
[Release/2.6][MPS] Fix crash on CPU scalars
|
malfet
|
closed
|
[
"release notes: mps",
"ciflow/mps"
] | 1
|
CONTRIBUTOR
|
This cherry-picks the following PRs into the 2.6 branch, fixing crashes when fmin/fmax, bucketize, or Metal kernels are invoked with CPU tensors:
- **[MPS] Fix fmin/fmax for scalar argument (#143934)**
- **[MPS] Handle implicit cpu-scalar-to-gpu transfer (#144055)**
| true
|
2,766,501,141
|
activation memory budget partitioner can fail with unbacked symints
|
bdhirsh
|
closed
|
[
"high priority",
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 2
|
CONTRIBUTOR
|
internal xref: https://fb.workplace.com/groups/1075192433118967/posts/1567692087202330/?comment_id=1572673046704234&reply_comment_id=1577244289580443
Stacktrace below. Still working on a minimal repro, but a few things that become apparent from looking at the [tlparse](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/aps-ig_v4_4t_4_brian_fix_more_log-1f26ffb451/attempt_0/version_0/rank_0/1_0_0/aot_joint_graph_7.txt?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=100&fbclid=IwZXh0bgNhZW0CMTEAAR3PXnExS6X2RbGJVZA4YDlJxOplWmMtBCqTDDkB1F4nLGu1N3aML9CZ-kM_aem_XfnDO_Q9FfE_kYaJbP3IHw):
(1) The partitioner runs some ops in the graph under the FlopCounter during mem budget partitioning, but errors inside one of the meta functions
(2) surprisingly, these meta functions didn't error previously during dynamo / AOT tracing - they only fail later when they are re-run in the partitioner.
Still working on a min repro
```
Traceback (most recent call last):
File "/.../torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/.../torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/.../torch/_utils_internal.py", line 310, in wrapper_function
return func(*args, **kwargs)
File "/.../torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/.../torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/.../torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/.../torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/.../torch/_dynamo/symbolic_convert.py", line 2870, in run
super().run()
File "/.../torch/_dynamo/symbolic_convert.py", line 1053, in run
while self.step():
File "/.../torch/_dynamo/symbolic_convert.py", line 963, in step
self.dispatch_table[inst.opcode](self, inst)
File "/.../torch/_dynamo/symbolic_convert.py", line 3050, in RETURN_VALUE
self._return(inst)
File "/.../torch/_dynamo/symbolic_convert.py", line 3035, in _return
self.output.compile_subgraph(
File "/.../torch/_dynamo/output_graph.py", line 1136, in compile_subgraph
self.compile_and_call_fx_graph(
File "/.../torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/.../torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/.../torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/.../torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/.../torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/.../torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/.../torch/_inductor/compile_fx.py", line 1880, in compile_fx
return aot_autograd(
File "/.../torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/.../torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/.../torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/.../torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/.../torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/.../torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 447, in aot_dispatch_autograd
fw_module, bw_module = aot_config.partition_fn(
File "/.../torch/_inductor/compile_fx.py", line 1797, in partition_fn
return min_cut_rematerialization_partition(
File "/.../torch/_functorch/partitioners.py", line 1823, in min_cut_rematerialization_partition
saved_values = choose_saved_values_set(
File "/.../torch/_functorch/partitioners.py", line 1573, in choose_saved_values_set
runtimes_banned_nodes = [
File "/.../torch/_functorch/partitioners.py", line 1574, in <listcomp>
estimate_runtime(node) for node in all_recomputable_banned_nodes
File "/.../torch/_functorch/partitioners.py", line 1458, in estimate_runtime
node.target(*args, **kwargs)
File "/.../torch/_ops.py", line 722, in __call__
return self._op(*args, **kwargs)
File "/.../torch/utils/flop_counter.py", line 772, in __torch_dispatch__
out = func(*args, **kwargs)
File "/.../torch/_ops.py", line 722, in __call__
return self._op(*args, **kwargs)
File "/.../torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "/.../torch/_subclasses/fake_tensor.py", line 1276, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/.../torch/_subclasses/fake_tensor.py", line 1817, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/.../torch/_subclasses/fake_tensor.py", line 1387, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/.../torch/_subclasses/fake_tensor.py", line 2385, in _dispatch_impl
r = func(*args, **kwargs)
File "/.../torch/_ops.py", line 722, in __call__
return self._op(*args, **kwargs)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError:
Exception raised from <custom_op_meta>
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bobrenjc93 @yf225
| true
|
2,766,484,562
|
[profiler][python 3.13] profiler with_stack=True failing on python 3.13
|
davidberard98
|
open
|
[
"oncall: profiler"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Repro:
```python
import torch
class ModuleA(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x):
return self.linear(x)
class ModuleB(torch.nn.Module):
def __init__(self):
super().__init__()
self.a = ModuleA()
def forward(self, x):
return self.a(x).relu()
mod = ModuleB()
torch.jit.script(mod)
with torch.profiler.profile(with_stack=True, schedule=torch.profiler.schedule(warmup=2, wait=2, active=3, repeat=1)) as prof:
for _ in range(10):
x = torch.rand(4, 4)
mod(x)
prof.step()
prof.export_chrome_trace("torchscript_stack.json")
```
Result: segfault.
### Versions
pytorch commit: f7e621c3ce623996510b87901e729be2138679b2 (dec 11)
A100 build.
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @briancoutinho @sraikund16 @sanrise
| true
|
2,766,475,422
|
remove allow-untyped-defs from _export/db/logging.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: export"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144093
| true
|
2,766,475,341
|
remove allow-untyped-defs from torch/mps/event.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144093
* __->__ #144092
| true
|
2,766,475,264
|
remove allow-untyped-defs from ao/quantization/experimental/fake_quantize.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing",
"release notes: AO frontend"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144093
* #144092
* __->__ #144091
| true
|
2,766,475,160
|
remove allow-untyped-defs from distributed/elastic/utils/data/cycling_iterator.py
|
bobrenjc93
|
closed
|
[
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: distributed (torchelastic)"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144090
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,766,475,065
|
remove allow-untyped-defs from utils/_import_utils.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144089
| true
|
2,766,474,986
|
remove allow-untyped-defs from utils/data/datapipes/iter/streamreader.py
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: dataloader",
"topic: not user facing"
] | 12
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144088
| true
|
2,766,440,615
|
[ROCm][NFC] Fix condition for small tensor tuning
|
doru1004
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda",
"ciflow/rocm"
] | 4
|
CONTRIBUTOR
|
Fix condition for small tensor tuning to not impact non-ROCm compilation.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,766,375,407
|
Fix nan propagation for minimum() and maximum() in MPS
|
jhavukainen
|
closed
|
[
"open source",
"Merged",
"module: mps",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 5
|
COLLABORATOR
|
Fixes #143976
- Moves minimum and maximum operations to use the NaN propagating call into MPSGraph instead of the default one.
- Adds test for the NaN propagating case to `test_mps.py`.
- Adjusts the inductor metal backend implementation for minimum and maximum to also respect the nan propagation.
Additions by @malfet:
- Introduce an MPSGraph+PyTorchFixups interface following the [Customizing existing classes](https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/ProgrammingWithObjectiveC/CustomizingExistingClasses/CustomizingExistingClasses.html) tutorial and implement `minimumWithNaNPropagationAndIntFallbackWithPrimaryTensor:`, since `minimumWithNaNPropagationWithPrimaryTensor:` segfaults when called for integral types
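An illustrative expectation check (assuming an MPS device is available; the CPU result is shown for reference):
```python
import torch

a = torch.tensor([1.0, float("nan")])
b = torch.tensor([2.0, 3.0])
print(torch.minimum(a, b))  # tensor([1., nan]) on CPU: NaN propagates
if torch.backends.mps.is_available():
    # After this change the MPS result should match the CPU result.
    print(torch.minimum(a.to("mps"), b.to("mps")))
```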
cc @kulinseth @albanD @malfet @DenisVieriu97 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,315,965
|
Added 'Use tensor in PyTorch' section to README
|
guan0612
|
closed
|
[
"open source",
"topic: not user facing"
] | 3
|
NONE
|
Add 'Use tensor in PyTorch'
| true
|
2,766,307,631
|
[MPSInductor] Add `masked` implementation
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144170
* __->__ #144084
* #144083
* #144162
* #144167
More or less borrowed from
https://github.com/pytorch/pytorch/blob/22580f160e9ff6f5a54bc5abd03ba3eb75519e10/torch/_inductor/codegen/halide.py#L549-L563
`pytest test/inductor/test_torchinductor.py -k _mps` score is 408 failed, 347 passed, 32 skipped
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,307,455
|
[MPSInductor] Add `floor_div` and `index_expr` implementation
|
malfet
|
closed
|
[
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144170
* #144084
* __->__ #144083
* #144162
* #144167
Simply copy-n-pasted from CPPInductor
`pytest test/inductor/test_torchinductor.py -k _mps` score is 418 failed, 337 passed, 32 skipped
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,766,267,081
|
Added a usage example to the README
|
nash0220
|
closed
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
NONE
|
This PR adds a simple example of using PyTorch to build a neural network.
| true
|
2,766,238,216
|
[AOTI] Remove more AOTI_TORCH_EXPORT
|
desertfire
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: Similar to https://github.com/pytorch/pytorch/pull/142500, remove redundant AOTI_TORCH_EXPORT from several cpp files to solve a Windows build issue.
Differential Revision: D67762069
| true
|
2,766,231,810
|
Inconsistent `padding_value` rounding decision when using `torch.nn.utils.rnn.pad_sequence` under torch.compile and eager
|
meetmul
|
open
|
[
"module: nn",
"triaged",
"module: type promotion",
"oncall: pt2",
"module: inductor"
] | 0
|
NONE
|
### 🐛 Describe the bug
I think this is caused by inconsistent type casting between torch.compile and eager.
When `sequences` is a mix of complex and integer tensors, pad_sequence under torch.compile will round `padding_value` to 0, but eager mode will keep `padding_value` as -0.7. See the code below for details:
```python
import torch
class RNNPadSequence(torch.nn.Module):
def forward(self, sequences, padding_value=0.):
return torch.nn.utils.rnn.pad_sequence(sequences, padding_value=padding_value)
model = RNNPadSequence()
compiled = torch.compile(model)
sequences = [torch.tensor([0, 0.4+0.j]), torch.tensor([0], dtype=torch.int32)]
padding_value = -0.7
print(model(sequences,padding_value=padding_value))
'''
tensor([[ 0.0000+0.j, 0.0000+0.j],
[ 0.4000+0.j, -0.7000+0.j]])
'''
print(compiled(sequences,padding_value=padding_value))
'''
tensor([[0.0000+0.j, 0.0000+0.j],
[0.4000+0.j, 0.0000+0.j]])
'''
```
Here are two interesting details:
1. If I set the sequences to `[torch.tensor([0.4+0.j]), torch.tensor([0], dtype=torch.int32)]`, both modes will round the padding_value to 0 and output:
```
tensor([[0.4000+0.j, 0.0000+0.j]])
```
2. If I set the first tensor of `sequences` to `float` instead of `complex` (for example, setting the sequences to `[torch.tensor([0.4]), torch.tensor([0], dtype=torch.int32)]`), eager mode works as normal but torch.compile raises the following error:
```
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
LoweringException: AssertionError:
target: aten.select_scatter.default
args[0]: TensorBox(StorageBox(
Pointwise(
'cpu',
torch.float32,
def inner_fn(index):
i0, i1 = index
tmp0 = ops.index_expr(i1, torch.int32)
tmp1 = ops.index_expr(0, torch.int32)
tmp2 = tmp0 == tmp1
tmp3 = ops.load(arg0_1, i0)
tmp4 = ops.constant(-0.7, torch.float32)
tmp5 = ops.where(tmp2, tmp3, tmp4)
return tmp5
,
ranges=[2, 2],
origin_node=select_scatter_default,
origins=OrderedSet([select_scatter_default, full_default])
)
))
args[1]: TensorBox(StorageBox(
Pointwise(
'cpu',
torch.int32,
def inner_fn(index):
i0 = index
tmp0 = ops.index_expr(i0, torch.int64)
tmp1 = ops.index_expr(1, torch.int64)
tmp2 = tmp0 < tmp1
tmp3 = ops.load(arg1_1, 0)
tmp4 = ops.masked(tmp2, tmp3, 0)
return tmp4
,
ranges=[2],
origin_node=constant_pad_nd_1,
origins=OrderedSet([constant_pad_nd_1])
)
))
args[2]: 1
args[3]: 1
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
### Versions
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @nairbv @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @ezyang @anjali411 @dylanbespalko @nikitaved
| true
|
2,766,092,921
|
Test s390x docker image build
|
AlekseiNikiforovIBM
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Test s390x docker image build
| true
|
2,766,026,893
|
Fix PythonMod printing
|
isuruf
|
closed
|
[
"module: cpu",
"module: regression",
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: dynamo"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144078
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Fixes #144075
| true
|
2,766,026,015
|
broken link at https://pytorch.org/docs/stable/_modules/torch/_tensor.html#Tensor.backward
|
nadeeer
|
closed
|
[] | 1
|
NONE
|
### 📚 The doc issue
I am trying to check the source code for Tensor.backward() and tried to follow the link in the documentation page with no luck.
### Suggest a potential alternative/fix
_No response_
| true
|
2,766,003,518
|
[reland][AMD] Turn on TF32 for aten::mm (#143549)
|
jeanschmidt
|
closed
|
[
"fb-exported",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 124
|
CONTRIBUTOR
|
Summary:
hipblaslt supports TF32, so this adds the support.
Original PR https://github.com/pytorch/pytorch/pull/139869
Test Plan: CI
Reviewed By: leitian
Differential Revision: D67431681
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,765,944,123
|
[regression] Incorrect symbolic output shape and guards for arange, avg pool and conv ops
|
BartlomiejStemborowski
|
closed
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When using the latest PyTorch 2.6 RC, it looks like the output shape metadata in the compile dynamic flow is incorrect for the arange op.
I received the following graph, where the output shape is calculated as (s0 + 1//2), whereas in PT 2.5 it is ((s0 + 1)//2).
PT 2.6 graph:
```
TRACED GRAPH
===== __compiled_fn_3 =====
.../site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, L_end_: "Sym(s0)"):
l_end_ = L_end_
# File: .../repro.py:11 in fn, code: return torch.arange(start=0, step=2, end=end, device=device)
arange: "i64[(s0 + 1//2)][1]cpu" = torch.arange(start = 0, step = 2, end = l_end_, device = 'cpu'); l_end_ = None
return (arange,)
```
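The parenthesization matters: `torch.arange(0, end, 2)` has `(end + 1) // 2` elements, while `end + 1 // 2` evaluates to `end`. A quick sanity check with the same `end` values as the reproducer below:
```python
import torch

for end in (7, 17, 13):
    n = torch.arange(start=0, step=2, end=end).numel()
    assert n == (end + 1) // 2   # the PT 2.5 expression
    assert n != end + 1 // 2     # what "(s0 + 1//2)" would evaluate to
```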
The issue has been observed since this PR was merged: https://github.com/pytorch/pytorch/pull/140597
It was also observed for other ops like AvgPool or Conv.
However, with the reproducer below I don't get any explicit error or incorrect results.
Reproducer
```
import torch
import logging
torch._logging.set_logs(dynamo = logging.DEBUG)
torch._dynamo.reset()
device, backend = 'cpu', 'eager'
def fn(end):
return torch.arange(start=0, step=2, end=end, device=device)
fn_cmp = torch.compile(fn, dynamic=None, fullgraph=True, backend=backend)
for end in [7,17,13]:
res = fn_cmp(end)
print(res)
```
CC
@isuruf @anijain2305 @ezyang
### Error logs
I0102 12:18:06.346000 5178 torch/_dynamo/__init__.py:99] torch._dynamo.reset
I0102 12:18:06.350000 5178 torch/_dynamo/__init__.py:132] torch._dynamo.reset_code_caches
I0102 12:18:06.408000 5178 torch/_dynamo/utils.py:1238] [0/0] ChromiumEventLogger initialized with id 3e624b3e-ed04-4797-a991-62568458c660
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] torchdynamo start compiling fn <ipython-input-1-71c43cf3ec21>:9, stack (elided 4 frames):
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] return _run_code(code, main_globals, None,
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] exec(code, run_globals)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py", line 37, in <module>
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] ColabKernelApp.launch_instance()
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py", line 992, in launch_instance
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] app.start()
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py", line 619, in start
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] self.io_loop.start()
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py", line 195, in start
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] self.asyncio_loop.run_forever()
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] self._run_once()
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] handle._run()
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] self._context.run(self._callback, *self._args)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 685, in <lambda>
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] lambda f: self._run_callback(functools.partial(callback, future))
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 738, in _run_callback
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] ret = callback()
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 825, in inner
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] self.ctx_run(self.run)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] yielded = self.gen.send(value)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 361, in process_one
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] yield gen.maybe_future(dispatch(*args))
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] yielded = ctx_run(next, result)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 261, in dispatch_shell
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] yield gen.maybe_future(handler(stream, idents, msg))
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] yielded = ctx_run(next, result)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 539, in execute_request
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] self.do_execute(
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] yielded = ctx_run(next, result)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py", line 302, in do_execute
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] res = shell.run_cell(code, store_history=store_history, silent=silent)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py", line 539, in run_cell
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 2975, in run_cell
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] result = self._run_cell(
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3030, in _run_cell
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] return runner(coro)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py", line 78, in _pseudo_sync_runner
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] coro.send(None)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3257, in run_cell_async
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3473, in run_ast_nodes
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] if (await self.run_code(code, result, async_=asy)):
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] exec(code_obj, self.user_global_ns, self.user_ns)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] File "<ipython-input-1-71c43cf3ec21>", line 15, in <cell line: 14>
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0] res = fn_cmp(end)
V0102 12:18:06.410000 5178 torch/_dynamo/convert_frame.py:950] [0/0]
I0102 12:18:06.415000 5178 torch/_dynamo/symbolic_convert.py:2744] [0/0] Step 1: torchdynamo start tracing fn <ipython-input-1-71c43cf3ec21>:9
I0102 12:18:06.420000 5178 torch/fx/experimental/symbolic_shapes.py:3221] [0/0] create_env
V0102 12:18:06.428000 5178 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] TRACE starts_line <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:06.428000 5178 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] return torch.arange(start=0, step=2, end=end, device=device)
V0102 12:18:06.431000 5178 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0102 12:18:06.435000 5178 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_ATTR arange [PythonModuleVariable(<module 'torch' from '/usr/local/lib/python3.10/dist-packages/torch/__init__.py'>)]
V0102 12:18:06.440000 5178 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_CONST 0 [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>)]
V0102 12:18:06.442000 5178 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_CONST 2 [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0)]
V0102 12:18:06.445000 5178 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_FAST end [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0), ConstantVariable(int: 2)]
V0102 12:18:06.448000 5178 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL device [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0), ConstantVariable(int: 2), LazyVariableTracker()]
V0102 12:18:06.451000 5178 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('start', 'step', 'end', 'device') [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0), ConstantVariable(int: 2), LazyVariableTracker(), ConstantVariable(str: 'cpu')]
V0102 12:18:06.453000 5178 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 4 [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0), ConstantVariable(int: 2), LazyVariableTracker(), ConstantVariable(str: 'cpu'), TupleVariable(length=4)]
V0102 12:18:06.475000 5178 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
I0102 12:18:06.477000 5178 torch/_dynamo/symbolic_convert.py:3066] [0/0] Step 1: torchdynamo done tracing fn (RETURN_VALUE)
V0102 12:18:06.481000 5178 torch/_dynamo/symbolic_convert.py:3070] [0/0] RETURN_VALUE triggered compile
V0102 12:18:06.484000 5178 torch/_dynamo/output_graph.py:979] [0/0] COMPILING GRAPH due to GraphCompileReason(reason='return_value', user_stack=[<FrameSummary file <ipython-input-1-71c43cf3ec21>, line 10 in fn>], graph_break=False)
V0102 12:18:06.494000 5178 torch/_dynamo/output_graph.py:1362] [0/0] [__graph_code] TRACED GRAPH
V0102 12:18:06.494000 5178 torch/_dynamo/output_graph.py:1362] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0102 12:18:06.494000 5178 torch/_dynamo/output_graph.py:1362] [0/0] [__graph_code] /usr/local/lib/python3.10/dist-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0102 12:18:06.494000 5178 torch/_dynamo/output_graph.py:1362] [0/0] [__graph_code] def forward(self):
V0102 12:18:06.494000 5178 torch/_dynamo/output_graph.py:1362] [0/0] [__graph_code] # File: <ipython-input-1-71c43cf3ec21>:10 in fn, code: return torch.arange(start=0, step=2, end=end, device=device)
V0102 12:18:06.494000 5178 torch/_dynamo/output_graph.py:1362] [0/0] [__graph_code] arange: "i64[4][1]cpu" = torch.arange(start = 0, step = 2, end = 7, device = 'cpu')
V0102 12:18:06.494000 5178 torch/_dynamo/output_graph.py:1362] [0/0] [__graph_code] return (arange,)
V0102 12:18:06.494000 5178 torch/_dynamo/output_graph.py:1362] [0/0] [__graph_code]
V0102 12:18:06.494000 5178 torch/_dynamo/output_graph.py:1362] [0/0] [__graph_code]
I0102 12:18:06.497000 5178 torch/_dynamo/output_graph.py:1469] [0/0] Step 2: calling compiler function eager
I0102 12:18:06.501000 5178 torch/_dynamo/output_graph.py:1474] [0/0] Step 2: done compiler function eager
I0102 12:18:06.509000 5178 torch/fx/experimental/symbolic_shapes.py:4594] [0/0] produce_guards
V0102 12:18:06.510000 5178 torch/_dynamo/guards.py:2363] [0/0] [__guards] GUARDS:
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards]
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] TREE_GUARD_MANAGER:
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] +- RootGuardManager
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:500 in init_ambient_guards
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | +- GuardManager: source=L['end'], accessed_by=FrameLocalsGuardAccessor(key='end', framelocals_idx=0)
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | | +- EQUALS_MATCH: L['end'] == 7 # return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor('torch')
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 136730406958912) # return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].arange, accessed_by=GetAttrGuardAccessor(arange)
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].arange, 136730353679424) # return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | | +- GuardManager: source=G['device'], accessed_by=DictGetItemGuardAccessor('device')
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards] | | | +- EQUALS_MATCH: G['device'] == 'cpu' # return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:06.515000 5178 torch/_dynamo/guards.py:2320] [0/0] [__guards]
V0102 12:18:07.517000 5178 torch/_dynamo/guards.py:2345] [0/0] [__guards] Guard eval latency = 0.54 us
I0102 12:18:07.519000 5178 torch/_dynamo/pgo.py:642] [0/0] put_code_state: no cache key, skipping
I0102 12:18:07.523000 5178 torch/_dynamo/convert_frame.py:1068] [0/0] run_gc_after_compile: running gc
V0102 12:18:07.539000 5178 torch/_dynamo/convert_frame.py:1371] skipping: _fn (reason: in skipfiles, file: /usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] torchdynamo start compiling fn <ipython-input-1-71c43cf3ec21>:9, stack (elided 4 frames):
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] return _run_code(code, main_globals, None,
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] exec(code, run_globals)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py", line 37, in <module>
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] ColabKernelApp.launch_instance()
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py", line 992, in launch_instance
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] app.start()
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py", line 619, in start
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] self.io_loop.start()
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py", line 195, in start
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] self.asyncio_loop.run_forever()
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] self._run_once()
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] handle._run()
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] self._context.run(self._callback, *self._args)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 685, in <lambda>
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] lambda f: self._run_callback(functools.partial(callback, future))
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 738, in _run_callback
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] ret = callback()
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 825, in inner
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] self.ctx_run(self.run)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] yielded = self.gen.send(value)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 361, in process_one
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] yield gen.maybe_future(dispatch(*args))
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] yielded = ctx_run(next, result)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 261, in dispatch_shell
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] yield gen.maybe_future(handler(stream, idents, msg))
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] yielded = ctx_run(next, result)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 539, in execute_request
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] self.do_execute(
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] yielded = ctx_run(next, result)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py", line 302, in do_execute
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] res = shell.run_cell(code, store_history=store_history, silent=silent)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py", line 539, in run_cell
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 2975, in run_cell
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] result = self._run_cell(
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3030, in _run_cell
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] return runner(coro)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py", line 78, in _pseudo_sync_runner
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] coro.send(None)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3257, in run_cell_async
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3473, in run_ast_nodes
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] if (await self.run_code(code, result, async_=asy)):
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] exec(code_obj, self.user_global_ns, self.user_ns)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] File "<ipython-input-1-71c43cf3ec21>", line 15, in <cell line: 14>
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1] res = fn_cmp(end)
V0102 12:18:07.543000 5178 torch/_dynamo/convert_frame.py:950] [0/1]
I0102 12:18:07.547000 5178 torch/_dynamo/symbolic_convert.py:2744] [0/1] Step 1: torchdynamo start tracing fn <ipython-input-1-71c43cf3ec21>:9
I0102 12:18:07.551000 5178 torch/fx/experimental/symbolic_shapes.py:3221] [0/1] create_env
V0102 12:18:07.555000 5178 torch/_dynamo/symbolic_convert.py:956] [0/1] [__trace_source] TRACE starts_line <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:07.555000 5178 torch/_dynamo/symbolic_convert.py:956] [0/1] [__trace_source] return torch.arange(start=0, step=2, end=end, device=device)
V0102 12:18:07.558000 5178 torch/_dynamo/symbolic_convert.py:979] [0/1] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0102 12:18:07.560000 5178 torch/_dynamo/symbolic_convert.py:979] [0/1] [__trace_bytecode] TRACE LOAD_ATTR arange [PythonModuleVariable(<module 'torch' from '/usr/local/lib/python3.10/dist-packages/torch/__init__.py'>)]
V0102 12:18:07.562000 5178 torch/_dynamo/symbolic_convert.py:979] [0/1] [__trace_bytecode] TRACE LOAD_CONST 0 [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>)]
V0102 12:18:07.565000 5178 torch/_dynamo/symbolic_convert.py:979] [0/1] [__trace_bytecode] TRACE LOAD_CONST 2 [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0)]
V0102 12:18:07.567000 5178 torch/_dynamo/symbolic_convert.py:979] [0/1] [__trace_bytecode] TRACE LOAD_FAST end [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0), ConstantVariable(int: 2)]
V0102 12:18:07.568000 5178 torch/_dynamo/symbolic_convert.py:979] [0/1] [__trace_bytecode] TRACE LOAD_GLOBAL device [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0), ConstantVariable(int: 2), LazyVariableTracker()]
V0102 12:18:07.570000 5178 torch/_dynamo/symbolic_convert.py:979] [0/1] [__trace_bytecode] TRACE LOAD_CONST ('start', 'step', 'end', 'device') [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0), ConstantVariable(int: 2), LazyVariableTracker(), ConstantVariable(str: 'cpu')]
V0102 12:18:07.572000 5178 torch/_dynamo/symbolic_convert.py:979] [0/1] [__trace_bytecode] TRACE CALL_FUNCTION_KW 4 [TorchInGraphFunctionVariable(<built-in method arange of type object at 0x7c5ae3386860>), ConstantVariable(int: 0), ConstantVariable(int: 2), LazyVariableTracker(), ConstantVariable(str: 'cpu'), TupleVariable(length=4)]
V0102 12:18:07.575000 5178 torch/_dynamo/pgo.py:324] [0/1] automatic dynamic int L['end'] val 17 != 7
I0102 12:18:07.597000 5178 torch/fx/experimental/symbolic_shapes.py:4470] [0/1] create_symbol s0 = 17 for L['end'] [-int_oo, int_oo] return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn (_dynamo/variables/builder.py:1927 in wrap_symint), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="s0" or to suppress this message run with TORCHDYNAMO_EXTENDED_ADVICE="0"
V0102 12:18:07.602000 5178 torch/_dynamo/output_graph.py:2201] [0/1] create_graph_input L_end_ L['end'] s0 at debug_level 0 before=False
V0102 12:18:07.612000 5178 torch/fx/experimental/symbolic_shapes.py:5849] [0/1] _update_var_to_range s0 = VR[0, int_oo] (update)
I0102 12:18:07.616000 5178 torch/fx/experimental/symbolic_shapes.py:6328] [0/1] eval s0 >= 0 [guard added] return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn (_refs/__init__.py:5040 in arange), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="s0 >= 0"
V0102 12:18:07.698000 5178 torch/fx/experimental/symbolic_shapes.py:6459] [0/1] eval ((s0 + 1//2)) >= 0 == True [statically known]
I0102 12:18:07.711000 5178 torch/fx/experimental/symbolic_shapes.py:6328] [0/1] eval [guard suppressed] Ne((s0 + 1//2), 0) [guard added] return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn (utils/_stats.py:26 in wrapper), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Ne((s0 + 1//2), 0)"
V0102 12:18:07.720000 5178 torch/_dynamo/symbolic_convert.py:979] [0/1] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
I0102 12:18:07.724000 5178 torch/_dynamo/symbolic_convert.py:3066] [0/1] Step 1: torchdynamo done tracing fn (RETURN_VALUE)
V0102 12:18:07.728000 5178 torch/_dynamo/symbolic_convert.py:3070] [0/1] RETURN_VALUE triggered compile
V0102 12:18:07.731000 5178 torch/_dynamo/output_graph.py:979] [0/1] COMPILING GRAPH due to GraphCompileReason(reason='return_value', user_stack=[<FrameSummary file <ipython-input-1-71c43cf3ec21>, line 10 in fn>], graph_break=False)
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code] TRACED GRAPH
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code] ===== __compiled_fn_3 =====
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code] /usr/local/lib/python3.10/dist-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code] def forward(self, L_end_: "Sym(s0)"):
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code] l_end_ = L_end_
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code]
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code] # File: <ipython-input-1-71c43cf3ec21>:10 in fn, code: return torch.arange(start=0, step=2, end=end, device=device)
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code] arange: "i64[(s0 + 1//2)][1]cpu" = torch.arange(start = 0, step = 2, end = l_end_, device = 'cpu'); l_end_ = None
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code] return (arange,)
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code]
V0102 12:18:07.740000 5178 torch/_dynamo/output_graph.py:1362] [0/1] [__graph_code]
tensor([0, 2, 4, 6])
I0102 12:18:07.746000 5178 torch/_dynamo/output_graph.py:1469] [0/1] Step 2: calling compiler function eager
I0102 12:18:07.748000 5178 torch/_dynamo/output_graph.py:1474] [0/1] Step 2: done compiler function eager
I0102 12:18:07.755000 5178 torch/fx/experimental/symbolic_shapes.py:4594] [0/1] produce_guards
V0102 12:18:07.757000 5178 torch/fx/experimental/symbolic_shapes.py:4802] [0/1] track_symint L['end'] s0 None
V0102 12:18:07.763000 5178 torch/_dynamo/guards.py:2363] [0/1] [__guards] GUARDS:
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards]
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] TREE_GUARD_MANAGER:
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] +- RootGuardManager
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:500 in init_ambient_guards
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | +- GuardManager: source=L['end'], accessed_by=FrameLocalsGuardAccessor(key='end', framelocals_idx=0)
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | | +- TYPE_MATCH: ___check_type_id(L['end'], 101000398807840) # return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor('torch')
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 136730406958912) # return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | | | +- GuardManager: source=G['torch'].arange, accessed_by=GetAttrGuardAccessor(arange)
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].arange, 136730353679424) # return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | | +- GuardManager: source=G['device'], accessed_by=DictGetItemGuardAccessor('device')
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] | | | +- EQUALS_MATCH: G['device'] == 'cpu' # return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards] +- LAMBDA_GUARD: 0 <= L['end'] # return torch.arange(start=0, step=2, end=end, device=device) # <ipython-input-1-71c43cf3ec21>:10 in fn (_refs/__init__.py:5040 in arange)
V0102 12:18:07.766000 5178 torch/_dynamo/guards.py:2320] [0/1] [__guards]
V0102 12:18:08.770000 5178 torch/_dynamo/guards.py:2345] [0/1] [__guards] Guard eval latency = 0.70 us
I0102 12:18:08.773000 5178 torch/_dynamo/pgo.py:642] [0/1] put_code_state: no cache key, skipping
I0102 12:18:08.775000 5178 torch/_dynamo/convert_frame.py:1068] [0/1] run_gc_after_compile: running gc
tensor([ 0, 2, 4, 6, 8, 10, 12, 14, 16])
tensor([ 0, 2, 4, 6, 8, 10, 12])
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241231+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4400.44
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.6.0.74
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvtx==0.2.10
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.6.0.dev20241231+cpu
[pip3] torchaudio==2.6.0.dev20250101+cpu
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.22.0.dev20250101+cpu
[conda] Could not collect
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @isuruf @anijain2305
| true
|
2,765,896,004
|
[Feat]: Add Multithreading support for kleidiai groupwise GEMM kernels
|
nikhil-arm
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 3
|
COLLABORATOR
|
The KleidiAI groupwise GEMM kernel was not 2D-blocked. This change adds support for 2D blocking of the GEMM kernel to efficiently split the workload and speed up the kernel across multiple threads.
Performance improvements:
7B model Pre-fill speedup from 145 t/s to 175 t/s
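For intuition, a minimal Python sketch of the 2D-blocking idea follows; it is not the KleidiAI implementation, and the tile sizes, names, and threading model here are illustrative only. The output matrix is tiled along both M and N, and each tile is an independent unit of work that can be assigned to any thread, which balances work better than splitting along a single dimension.
```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def gemm_2d_blocked(a, b, tile_m=64, tile_n=64, num_threads=4):
    """Conceptual 2D-blocked GEMM: each task computes one (tile_m x tile_n) output tile."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = np.zeros((m, n), dtype=a.dtype)

    def compute_tile(tile):
        i0, j0 = tile
        i1, j1 = min(i0 + tile_m, m), min(j0 + tile_n, n)
        # Tiles never overlap, so threads write disjoint regions of the output.
        out[i0:i1, j0:j1] = a[i0:i1, :] @ b[:, j0:j1]

    tiles = [(i, j) for i in range(0, m, tile_m) for j in range(0, n, tile_n)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        list(pool.map(compute_tile, tiles))
    return out

a = np.random.rand(256, 128).astype(np.float32)
b = np.random.rand(128, 192).astype(np.float32)
assert np.allclose(gemm_2d_blocked(a, b), a @ b, atol=1e-3)
```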
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @BoyuanFeng
| true
|
2,765,848,497
|
Avoid overflow in vector_norm for scalar input
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 16
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144073
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Fixes https://github.com/pytorch/pytorch/issues/143960, where torch.dist gave different results from eager because vector_norm overflowed; eager mode avoids the overflow for single-element reductions by not computing the power and then the root.
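A minimal illustration of the overflow being avoided (the squared-then-rooted form definitely overflows in float32; the expected eager output follows from the description above):
```python
import torch

x = torch.tensor([1e30], dtype=torch.float32)

# Naive 2-norm: squaring overflows float32 (1e60 > ~3.4e38), so the result is inf.
print(torch.sqrt((x * x).sum()))        # tensor(inf)

# Eager vector_norm short-circuits single-element reductions, so no overflow
# is expected here (per the description above).
print(torch.linalg.vector_norm(x))
```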
| true
|
2,765,672,368
|
Compile error for custom op with optional mutable tensor list argument
|
jerrychenhf
|
closed
|
[
"triaged",
"module: custom-operators",
"module: functionalization",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
It appears that PyTorch's auto-functionalization doesn't support a custom op with an optional mutable tensor list argument.
The following code shows the problem: the `Tensor(a!)[]? out_list` argument of the op is not supported by auto-functionalization:
```
import torch
@torch.library.custom_op("mylib::mysin", mutates_args=["out_list"], schema="(Tensor x, Tensor(a!)[]? out_list) -> (Tensor)")
def mysin(x: torch.Tensor, out_list: list[torch.Tensor] = None) -> torch.Tensor:
r = x.sin()
return r
@torch.library.register_fake("mylib::mysin")
def mysin_fake(x, out_list: list[torch.Tensor] = None) -> torch.Tensor:
return torch.empty_like(x)
def fn(x):
x = x * 3
s = [torch.empty_like(x)]
x= mysin(x, out_list=s)
x = x / 3
return x
fn = torch.compile(fn)
x = torch.randn(3, requires_grad=False)
y= fn(x)
print(y)
```
When executing the above code, the following exception happens:
```
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_subclasses/functional_tensor.py", line 535, in __torch_dispatch__
outs_unwrapped = func._op_dk(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Found a custom (non-ATen) operator whose output has alias annotations: mylib::mysin(Tensor x, Tensor(a!)[]? out_list) -> Tensor. We only support functionalizing operators whose outputs do not have alias annotations (e.g. 'Tensor(a)' is a Tensor with an alias annotation whereas 'Tensor' is a Tensor without. The '(a)' is the alias annotation). The alias annotation specifies that the output Tensor shares storage with an input that has the same annotation. Please check if (1) the output needs to be an output (if not, don't return it), (2) if the output doesn't share storage with any inputs, then delete the alias annotation. (3) if the output indeed shares storage with an input, then add a .clone() before returning it to prevent storage sharing and then delete the alias annotation. Otherwise, please file an issue on GitHub.
While executing %x_1 : [num_users=1] = call_function[target=torch.ops.mylib.mysin.default](args = (%x,), kwargs = {out_list: [%empty_like]})
Original traceback:
File "/home/haifchen/working/test/test-custom-op-alias.py", line 136, in fn
x= mysin(x, out_list=s)
File "/home/haifchen/working/envs/env_test_nightly/lib/python3.10/site-packages/torch/_library/custom_ops.py", line 669, in __call__
return self._opoverload(*args, **kwargs)
```
If we change the custom op schema to "(Tensor x, Tensor(a!)[] out_list) -> (Tensor)", it works (a minimal version of that variant is sketched below).
Are there any fundamental difficulties in supporting an optional mutable tensor list argument?
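For reference, a self-contained sketch of the non-optional variant; the op name `mylib::mysin_nonopt` is made up here, and this is untested, based only on the report that the non-optional schema functionalizes fine.
```python
import torch

@torch.library.custom_op(
    "mylib::mysin_nonopt",
    mutates_args=["out_list"],
    schema="(Tensor x, Tensor(a!)[] out_list) -> (Tensor)",  # non-optional mutable list
)
def mysin_nonopt(x: torch.Tensor, out_list: list[torch.Tensor]) -> torch.Tensor:
    return x.sin()

@torch.library.register_fake("mylib::mysin_nonopt")
def mysin_nonopt_fake(x, out_list):
    return torch.empty_like(x)

@torch.compile
def fn(x):
    x = x * 3
    s = [torch.empty_like(x)]
    x = mysin_nonopt(x, s)
    return x / 3

print(fn(torch.randn(3)))
```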
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20240914+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 4190.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] torch==2.6.0.dev20240914+cpu
[conda] Could not collect
cc @bdhirsh @ezyang @chauhang @penguinwu @zou3519 @yf225
| true
|
2,765,649,409
|
torch-nightly doesn't support tesla v100
|
Serenagirl
|
open
|
[
"needs reproduction",
"module: binaries",
"module: cuda",
"triaged"
] | 6
|
NONE
|
### 🐛 Describe the bug
Env: Tesla V100, driver 560.35.03, CUDA 12.4
use pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124
python:
import torch
print(torch.randn(5, 5).to(device)+torch.randn(5, 5).to(device))
But CUDA 12.4 supports the V100; I can't find which torch version supports V100 + CUDA 12.4 (or another GPU).
<img width="569" alt="1·33" src="https://github.com/user-attachments/assets/b7b0c42d-65ee-412a-aabf-cc1f2fc502c2" />
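For completeness, a self-contained version of the snippet above (the `device` string is assumed, since it is not defined in the original); `torch.cuda.get_arch_list()` reports which GPU architectures the installed wheel was built for, which helps confirm whether sm_70 (V100) is included:
```python
import torch

device = "cuda"  # assumed; not defined in the original snippet
print(torch.cuda.is_available(), torch.cuda.get_device_name(0))
print(torch.version.cuda, torch.cuda.get_arch_list())
print(torch.randn(5, 5).to(device) + torch.randn(5, 5).to(device))
```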
### Versions
2.6.0
cc @seemethere @malfet @osalpekar @atalman @ptrblck @msaroufim @eqy @ezyang @gchanan @zou3519 @kadeng
| true
|
2,765,646,098
|
Fix torch.normal ignores default_device
|
zeshengzong
|
closed
|
[
"module: distributions",
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 12
|
CONTRIBUTOR
|
Fixes #122886
1. Enable `torch.normal` working with `DeviceContext` to get default device which set via `set_default_device`.
2. Add a hint in the `set_default_device` doc suggesting use of the `torch.Tensor.to` method to move tensors to the desired device explicitly.
**Test Result**
1. **Doc Preview**

2. **Local Test**
```python
>>> import torch
>>> torch.normal(0.,1., (10,10)).device
device(type='cpu')
>>> torch.set_default_device('cuda')
>>> torch.normal(0.,1., (10,10)).device
device(type='cuda', index=0)
```
```bash
pytest test/test_tensor_creation_ops.py
```

```bash
lintrunner
```

cc @fritzo @neerajprad @alicanb @nikitaved
| true
|
2,765,632,914
|
[AMD] [ROCm] Numerical difference between Pytorch 2.6.0.dev of ROCm 6.2 and ROCm 6.3
|
tjtanaa
|
closed
|
[
"high priority",
"module: rocm",
"triaged"
] | 5
|
NONE
|
### 🐛 Describe the bug
# Description
I am getting different numerical results between PyTorch 2.6.0.dev built for ROCm 6.2 and for ROCm 6.3.
All the tests in https://github.com/linkedin/Liger-Kernel/pull/506 pass with PyTorch 2.6.0.dev for ROCm 6.2.
However, one of the tests fails in the environment with PyTorch 2.6.0.dev for ROCm 6.3.
- Only one failed test in ROCm 6.3
- FAILED CONVERGENCE TEST
```
====================================================== short test summary info =======================================================
FAILED test/convergence/test_mini_models.py::test_mini_model[mini_mllama-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] - AssertionError: Number of mismatched elements: 2
Mismatch at index (0, 7): tensor1[(0, 7)] = 3.0651497840881348, tensor2[(0, 7)] = 3.0652356147766113
Mismatch at index (0, 9): tensor1[(0, 9)] = 1.470238447189331, tensor2[(0, 9)] = 1.4702625274658203
======================================== 1 failed, 16 passed, 2 warnings in 94.82s (0:01:34) ==================
```
# Steps to reproduce:
1. Launch docker container
```
#!/bin/bash
sudo docker run -it \
--network=host \
--group-add=video \
--ipc=host \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
--device /dev/kfd \
--device /dev/dri \
-v <path-to-Liger-Kernel>:/liger-kernel-workspace \
rocm/pytorch:rocm6.3_ubuntu22.04_py3.10_pytorch_release_2.5.0_preview \
/bin/bash
```
2. Setup
```
python -m pip uninstall -y torch torchvision triton
python -m pip install -e .[dev] --extra-index-url https://download.pytorch.org/whl/nightly/rocm6.3
python -m pip install triton==3.1.0
```
3. Run tests
``` make test-convergence ```
https://github.com/linkedin/Liger-Kernel/pull/506
### Versions
```bash
root@root:/liger-kernel-workspace# python collect_env.py /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_pytree.py:185: FutureWarning: optree is installed but the version is too ol
d to support PyTorch Dynamo in C++ pytree. C++ pytree support is disabled. Please consider upgrading optree using `python3 -m pip install --u
pgrade 'optree>=0.13.0'`.
warnings.warn(
Collecting environment information...
PyTorch version: 2.6.0.dev20241231+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.3.1 24463 5d263cec09eef95b1ed5f3e7f6a578c616efa0a4)
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI210 (gfx90a:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 5879.8818
CPU min MHz: 3000.0000
BogoMIPS: 8983.59
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.12.1
[pip3] pytorch-triton-rocm==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241231+rocm6.3
[pip3] triton==3.1.0
[conda] No relevant packages
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,765,631,970
|
redundant recompilation caused by duplicated Sym()
|
MetaBlues
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo",
"recompilations"
] | 4
|
NONE
|
### 🐛 Describe the bug
Hello
I've been trying to reduce the number of recompiles during Megatron training recently and noticed that strange recompiles happened on RMSNorm.
```
@torch.compile(dynamic=True)
def rmsnorm_without_weight(hidden_states, eps=1e-6, dtype=torch.bfloat16):
variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
hidden_states = hidden_states * torch.rsqrt(variance + eps)
hidden_states = hidden_states.to(dtype)
return hidden_states
```
I compared the two compilation results from tlparse, and found that the only difference between the two kernels was the duplicated "Sym()" of arg0_1:
```
# kernel 1:
def forward(self, arg0_1: "Sym(s0)", arg1_1: "Sym(s1)", arg2_1: "bf16[s0, 1, s1][s1, s1, 1]cuda:0"):
# kernel 2:
def forward(self, arg0_1: "Sym(s2)", arg1_1: "Sym(s1)", arg2_1: "bf16[s0, 1, s1][s1, s1, 1]cuda:0"):
```
And this caused the guard failure:
```
L['hidden_states'].size()[0]*L['hidden_states'].size()[2] < 2147483648 # kernel 1
L['hidden_states'].size()[2]*L['hidden_states'].size()[0] < 2147483648 # kernel 2
```
I don't understand under what circumstances such a strange aliasing phenomenon would occur. And I want to figure out how to prevent recompiles like this.
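One possible mitigation, untested for this case: mark the varying dimensions as dynamic up front so the first compilation already treats them symbolically, and run with `TORCH_LOGS=recompiles` to see exactly which guard failed. A hedged sketch (shapes and device are placeholders):
```python
import torch

hidden_states = torch.randn(8, 1, 4096, device="cuda", dtype=torch.bfloat16)
# Dims 0 and 2 correspond to s0 and s1 in the graphs above and vary across calls.
torch._dynamo.mark_dynamic(hidden_states, 0)
torch._dynamo.mark_dynamic(hidden_states, 2)
out = rmsnorm_without_weight(hidden_states)
```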
### Error logs
tlparse output of Kernel 1
[2_0_0.zip](https://github.com/user-attachments/files/18288896/2_0_0.zip)
tlparse output of Kernel 2
[2_4_0.zip](https://github.com/user-attachments/files/18288898/2_4_0.zip)
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.1
Libc version: glibc-2.31
Python version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-3.0.3.kwai.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H800
GPU 1: NVIDIA H800
GPU 2: NVIDIA H800
GPU 3: NVIDIA H800
GPU 4: NVIDIA H800
GPU 5: NVIDIA H800
GPU 6: NVIDIA H800
GPU 7: NVIDIA H800
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8468
Stepping: 8
CPU MHz: 2100.000
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Virtualization: VT-x
L1d cache: 4.5 MiB
L1i cache: 3 MiB
L2 cache: 192 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cudnn-frontend==1.6.1
[pip3] nvtx==0.2.10
[pip3] pytorch-lightning==1.9.5
[pip3] torch==2.5.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torchcde==0.2.5
[pip3] torchcfm==1.0.5
[pip3] torchdiffeq==0.2.4
[pip3] torchdyn==1.0.6
[pip3] torchmetrics==1.5.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.17.2+c1d70fe
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,765,627,121
|
[dynamo][BE] move `zip_longest` polyfill to submodule `polyfills.itertools`
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144067
* #144066
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,765,626,616
|
[dynamo][BE] move `dropwhile` polyfill to submodule `polyfills.itertools`
|
XuehaiPan
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144067
* __->__ #144066
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,765,615,681
|
[cpu][vec] support reduce ops for add and max
|
Valentine233
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
### Description
While adding INT8 SDPA support in https://github.com/pytorch/ao/pull/1372, we found that `at::vec::vec_reduce_all<int32_t>` falls back to a slow scalar path when doing sum and max. So here, we add the two reduce-related ops `reduce_add` and `reduce_max` for `vec512` and `vec256`, using the Sequence instructions.
### Details
- Support vectorized `reduce_add` and `reduce_max` for dtypes `int32` and `float32`, using the Sequence instructions;
- Implement the scalar version for the fallback path in vec base;
- Add the operator `reduce` in vec base to simplify the code (see the conceptual sketch below).
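For intuition only, here is a plain-Python sketch of the pairwise (tree) reduction pattern that vectorized horizontal reduces typically follow; it does not mirror the actual vec512/vec256 code.
```python
def tree_reduce(values, op):
    """Pairwise reduction: combine halves until one value remains,
    mirroring how a lane-halving SIMD horizontal reduce works."""
    vals = list(values)
    while len(vals) > 1:
        half = (len(vals) + 1) // 2
        # Combine element i with element i + half; a leftover element passes through.
        vals = [op(vals[i], vals[i + half]) if i + half < len(vals) else vals[i]
                for i in range(half)]
    return vals[0]

print(tree_reduce(range(1, 17), lambda a, b: a + b))  # 136, like reduce_add over 16 lanes
print(tree_reduce(range(1, 17), max))                 # 16, like reduce_max over 16 lanes
```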
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,765,615,306
|
Support nanj in inductor
|
isuruf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144064
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
Fixes https://github.com/pytorch/pytorch/issues/144029
| true
|
2,765,559,063
|
When I use the optimizer, there is no gradient due to the use of uint8, but I have to use uint8
|
wang1528186571
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
def apply_relighting_tensor(tensor, alpha, beta):
tensor = tensor * 255.0
new_tensor = tensor.to(torch.uint8)
new_tensor = new_tensor * alpha + beta / 255.0
new_tensor = torch.abs(new_tensor)
new_tensor = new_tensor.to(torch.float32)
new_tensor = new_tensor / 255.0
return new_tensor
I know the values produced are correct, but the result has no gradient.
But when I keep everything in floating point, the generated image is not at all what I want. I don't know what to do.
def apply_relighting_tensor(tensor, alpha, beta):
tensor_float = tensor * 255.0
new_tensor = tensor_float * alpha + beta
new_tensor = torch.clamp(new_tensor, 0, 255)
new_tensor = new_tensor / 255.0
return new_tensor
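A small sketch of why the first version has no gradient: only floating-point (and complex) tensors can carry gradients, so the cast to uint8 cuts the autograd graph. The straight-through trick shown after it is only one hedged possibility, not necessarily the right fix for this image pipeline.
```python
import torch

x = torch.rand(4, requires_grad=True)

quantized = (x * 255.0).to(torch.uint8)  # integer dtype: autograd cannot track this
print(quantized.requires_grad)           # False -> everything downstream has no gradient

# Straight-through estimator: use the rounded value in the forward pass,
# but let gradients flow as if the rounding were the identity.
y = x * 255.0
rounded = y + (y.round() - y).detach()
out = (rounded / 255.0).sum()
out.backward()
print(x.grad)                            # gradients now exist
```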
| true
|
2,765,470,431
|
[dynamo][dicts] Remove special casing for SUPPORTED_NODES and sys.modules
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144062
* #144061
* #143997
* #144160
* #144158
* #144141
* #144130
* #144129
After https://github.com/pytorch/pytorch/pull/143997, the special casing is not required.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,765,470,366
|
[dynamo][refactor] Collect dict like variable building in one place
|
anijain2305
|
closed
|
[
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144062
* __->__ #144061
* #143997
* #144160
* #144158
* #144141
* #144130
* #144129
| true
|
2,765,465,035
|
call dist.nn.all_reduce then compute loss with torch.logdet().sum() raise grad Tensors must be contiguous error
|
ultranity
|
closed
|
[
"oncall: distributed",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
Background: this error came up while verifying #58005, where batched computations like torch.logdet followed by torch.sum raise a "grad Tensors must be contiguous" error.
Reproduce code:
```
import torch
import torch.distributed as dist
import torch.distributed.nn
from functools import partial
def worker(gpu, USE_NN_REDUCE = 0):
dist.init_process_group(
backend="nccl", init_method="tcp://localhost:12345", world_size=2, rank=gpu
)
torch.cuda.set_device(gpu)
torch.manual_seed(gpu)
torch.cuda.manual_seed_all(gpu)
x = torch.randn((2, 2, 3), device='cuda', requires_grad=True)
xx = torch.nn.functional.normalize(x, p=2, dim=-1)
cov = torch.stack([W.T.matmul(W) for W in xx])
if USE_NN_REDUCE:
cov=dist.nn.all_reduce(cov)
else:
dist.all_reduce(cov)
print("Value after all_reduce:", cov)
y = torch.logdet(torch.ones((3,3), device=gpu)+ 0.1*cov).sum()
#y = sum([torch.logdet(torch.ones((3,3), device=gpu)+ 0.1*cov[i]) for i in range(2)])
y.backward()
print(f"{USE_NN_REDUCE=}, {gpu=}, {y=}, {x.grad=}")
nn_worker = partial(worker, USE_NN_REDUCE=1)
def local():
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
x0 = torch.randn((2, 2, 3), device='cuda')
torch.manual_seed(1)
torch.cuda.manual_seed_all(1)
x1 = torch.randn((2, 2, 3), device='cuda')
x = torch.cat([x0, x1], dim=1)
x = x.requires_grad_()
xx = torch.nn.functional.normalize(x, p=2, dim=-1)
xx_all_reduce = torch.stack([W.T.matmul(W) for W in xx])
print(f"truth: xx_all_reduce={xx_all_reduce}")
y = torch.logdet(torch.ones((3,3), device='cuda')+ 0.1*xx_all_reduce).sum()
y.backward()
print(f"truth: {y=}")
print(f"truth: grad={x.grad}")
if __name__ == "__main__":
#dist.init_process_group(backend="nccl")
# if dist.get_rank()==0:
# local()
# worker(dist.get_rank())
# nn_worker(dist.get_rank())
local()
torch.multiprocessing.spawn(worker, nprocs=2)
torch.multiprocessing.spawn(nn_worker, nprocs=2)
```
Traceback:
```
File "torch/multiprocessing/spawn.py", line 90, in _wrap
fn(i, *args)
File "test_reduce.py", line 22, in worker
y.backward()
File "torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "torch/distributed/nn/functional.py", line 452, in backward
return (None, None) + (_AllReduce.apply(ctx.op, ctx.group, grad_output),)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/distributed/nn/functional.py", line 447, in forward
dist.all_reduce(tensor, op=op, group=group)
File "torch/distributed/c10d_logger.py", line 83, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "torch/distributed/distributed_c10d.py", line 2501, in all_reduce
work = group.allreduce([tensor], opts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Tensors must be contiguous
```
we can bypass the error by changing line 20 `y = torch.logdet(torch.ones((3,3), device=gpu)+ 0.1*cov).sum()`
to line 21 `y = sum([torch.logdet(torch.ones((3,3), device=gpu)+ 0.1*cov[i]) for i in range(2)])`
or `y=torch.stack([torch.logdet(torch.ones((3,3), device=gpu)+ 0.1*cov[i]) for i in range(2)]).sum()`,
but it indicates there may be an error similar to issue #73515
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.31
Python version: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:23:07) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H800
GPU 1: NVIDIA H800
GPU 2: NVIDIA H800
GPU 3: NVIDIA H800
GPU 4: NVIDIA H800
GPU 5: NVIDIA H800
GPU 6: NVIDIA H800
GPU 7: NVIDIA H800
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 224
On-line CPU(s) list: 0-223
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8480+
Stepping: 8
Frequency boost: enabled
CPU MHz: 3000.019
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 5.3 MiB
L1i cache: 3.5 MiB
L2 cache: 224 MiB
L3 cache: 210 MiB
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-benchmark==0.3.6
[pip3] pytorch-lightning==2.3.1
[pip3] pytorch-memlab==0.3.0
[pip3] pytorch-triton==3.0.0
[pip3] torch==2.5.1+cu124
[pip3] torch-fidelity==0.3.0
[pip3] torch-flops==0.3.5
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.6.0
[pip3] torchshow==0.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] lovely-numpy 0.2.13 pypi_0 pypi
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-benchmark 0.3.6 pypi_0 pypi
[conda] pytorch-lightning 2.3.1 pypi_0 pypi
[conda] pytorch-memlab 0.3.0 pypi_0 pypi
[conda] pytorch-triton 3.0.0 pypi_0 pypi
[conda] torch 2.5.1+cu124 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-flops 0.3.5 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchdata 0.10.1 pypi_0 pypi
[conda] torchmetrics 1.6.0 pypi_0 pypi
[conda] torchshow 0.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,765,450,146
|
[Inductor][CPP] Fix Inductor integer avg pool
|
DDEle
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 5
|
CONTRIBUTOR
|
Fixes #143738. Currently the scalar used for averaging is rounded to 0 if the dtype is an integer, resulting in an all-zero output. This fix uses `truediv` instead for integer cases.
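For illustration, a tiny sketch of the underlying rounding problem in plain Python (not the generated kernel code):
```python
kernel_size = 3
window = [5, 7, 9]

int_scale = 1 // kernel_size   # 0 -> every averaged value truncates to 0
true_scale = 1 / kernel_size   # 0.333... -> correct average

print(sum(window) * int_scale)   # 0
print(sum(window) * true_scale)  # 7.0
```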
## Test
```bash
pytest -vs ./test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCPU::test_comprehensive_nn_functional_avg_pool1d_cpu_int64
pytest -vs ./test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCPU::test_comprehensive_nn_functional_avg_pool2d_cpu_int64
pytest -vs ./test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCPU::test_comprehensive_nn_functional_avg_pool3d_cpu_int64
pytest -vs ./test/inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCPU::test_comprehensive_nn_functional_local_response_norm_cpu_int64
```
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,765,404,286
|
[Inductor] Fix `torch.polygamma()` when n == 0
|
shink
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 7
|
CONTRIBUTOR
|
Fixes #143648
aten:
https://github.com/pytorch/pytorch/blob/dec1a6d0f05f838dcec10492ef6091501258f816/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp#L436-L447
compiled kernel code:
```
cpp_fused_polygamma_0 = async_compile.cpp_pybinding(['const float*', 'float*'], '''
#include "/tmp/torchinductor_devuser/tmpi1d9ksww/db/cdb7hyptwxpzukwd42x4ajfjlgrpum4a4htdd6lhb65apclsmno4.h"
extern "C" void kernel(const float* in_ptr0,
float* out_ptr0)
{
{
{
{
auto tmp0 = in_ptr0[static_cast<int64_t>(0L)];
auto tmp1 = static_cast<float>(0.0);
auto tmp2 = tmp1 == 0 ? calc_digamma(tmp0) : calc_polygamma(tmp0, tmp1);
out_ptr0[static_cast<int64_t>(0L)] = tmp2;
}
}
}
}
''')
```
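A minimal check of the expected behaviour (a sketch, assuming an environment where `torch.compile` works): `polygamma(0, x)` should match `digamma(x)` both eagerly and compiled.
```python
import torch

x = torch.rand(4) + 1.0                            # keep inputs away from the poles
eager = torch.polygamma(0, x)                      # n == 0 should dispatch to digamma
print(torch.allclose(eager, torch.digamma(x)))     # True

compiled = torch.compile(lambda t: torch.polygamma(0, t))
print(torch.allclose(compiled(x), eager))          # True once the fix is in place
```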
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,765,378,690
|
[Inductor UT] Generalize device-bias code in test_torchinductor.py introduced by #143884.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144057
Fix #144056
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,765,377,255
|
[Break XPU] Hard code “cuda” in GPU test case introduced by #143884 cause failure on XPU.
|
etaf
|
closed
|
[
"triaged",
"module: xpu"
] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
PR #143884 introduced a new test case, test/inductor/test_torchinductor.py:test_donated_buffer_inplace_gpt, which does not specify requires_cuda but hard-codes the device type "cuda", so it fails on XPU.
https://github.com/pytorch/pytorch/blob/dec1a6d0f05f838dcec10492ef6091501258f816/test/inductor/test_torchinductor.py#L13369-L13379
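A hedged sketch of how such tests are usually generalized (assuming the common inductor test helpers; the actual fix may differ): use the shared `GPU_TYPE` constant instead of a literal `"cuda"`.
```python
import torch
from torch.testing._internal.inductor_utils import GPU_TYPE, HAS_GPU

def make_gpu_input():
    # GPU_TYPE resolves to "cuda", "xpu", ... depending on the available backend
    device = GPU_TYPE if HAS_GPU else "cpu"
    return torch.randn(4, 4, device=device)

print(make_gpu_input().device)
```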
### Versions
PyTorch version: 2.6.0a0+gitdec1a6d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,765,359,612
|
[MPS] Handle implicit cpu-scalar-to-gpu transfer
|
malfet
|
closed
|
[
"Merged",
"release notes: mps",
"ciflow/mps"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144084
* #144083
* #144051
* #144050
* __->__ #144055
Follow-up to https://github.com/pytorch/pytorch/pull/143934: this check is no longer necessary, and removing it fixes a subset of inductor tests.
Before this change, `pytest test/inductor/test_torchinductor.py -k _mps` reported 463 failed, 291 passed, 32 skipped; after it, 456 failed, 298 passed, 32 skipped.
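For context, a minimal sketch of the kind of op this covers (requires an MPS-capable Mac): a 0-dim CPU tensor is used as a scalar operand of an MPS tensor without an explicit copy.
```python
import torch

if torch.backends.mps.is_available():
    x = torch.rand(4, device="mps")
    cpu_scalar = torch.tensor(2.0)    # 0-dim tensor living on the CPU
    print(x * cpu_scalar)             # implicit cpu-scalar-to-gpu transfer
```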
| true
|
2,765,301,857
|
item() on DTensor only grabs the local tensor
|
ad8e
|
closed
|
[
"oncall: distributed",
"triaged",
"module: dtensor"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
An example of a tensor for which the local tensor is insufficient is a norm, which is sharded across many GPUs.
I have not run this test case because I don't have a convenient 2-GPU system, but the correct result would be `8` (the norm of the whole tensor), and I expect this to instead print `5.65 = 4*sqrt(2)` (the norm of half the tensor). Run with: `torchrun --standalone --nnodes=1 --nproc-per-node=2 dtensor_2nodes.py`
```
import torch
from torch.distributed._tensor import DTensor, Shard, Replicate, distribute_tensor, distribute_module, init_device_mesh
import os
import torch.distributed as dist
import datetime
import torch.nn as nn
import torch.nn.functional as F
world_size = int(os.getenv("WORLD_SIZE", None))
local_rank = int(os.getenv("LOCAL_RANK", None))
global_rank = int(os.getenv("RANK", None))
print(f"world_size: {world_size}, local_rank: {local_rank}, global_rank: {global_rank}")
dist.init_process_group(
backend="cuda:nccl",
init_method=None,
world_size=world_size,
rank=global_rank,
device_id=torch.device(f"cuda:{local_rank}"),
timeout=datetime.timedelta(seconds=120),
)
device_mesh = init_device_mesh("cuda", (2,))
rowwise_placement = [Shard(0)]
local_tensor = torch.randn((8, 8), requires_grad=True)
rowwise_tensor = DTensor.from_local(local_tensor, device_mesh, rowwise_placement)
print("correct would be 8, but it's probably 5.65", rowwise_tensor.norm().item())
```
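A hedged, untested sketch of the workaround I would expect (appended to the repro above): materialize the global tensor before reducing it to a Python scalar, e.g. via `DTensor.full_tensor()`.
```python
# continuation of the repro above; rowwise_tensor is the sharded DTensor
global_norm = rowwise_tensor.full_tensor().norm().item()
print("norm of the whole tensor:", global_norm)
```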
### Versions
```
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.13
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
| true
|
2,765,235,593
|
cuDNN version is not detected correctly in PyTorch
|
celestinoxp
|
closed
|
[
"module: cudnn",
"module: cuda",
"triaged"
] | 4
|
NONE
|
### 🐛 Describe the bug
I am experiencing issues with PyTorch not detecting the correct version of cuDNN. Here’s the setup:
I installed Nightly PyTorch 2.6 using the following command:
```python
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
```
I also installed the latest supported version of cuDNN for CUDA 11.8, which is cuDNN 9.6; see: https://docs.nvidia.com/deeplearning/cudnn/latest/reference/support-matrix.html
When running the following code to check the cuDNN version:
```python
import torch
print(f"CUDNN version: {torch.backends.cudnn.version()}")
```
the returned version is 90100 (cuDNN 9.1), but it should be 90600 (cuDNN 9.6).
### Versions
PyTorch version: 2.6.0.dev20241231+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home (10.0.22631 64 bits)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:06:27) [MSC v.1942 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650 Ti
Nvidia driver version: 566.36
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\cudnn_ops64_9.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 7 4800H with Radeon Graphics
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2900
MaxClockSpeed: 2900
L2CacheSize: 4096
L2CacheSpeed: None
Revision: 24577
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy==1.14.0
[pip3] mypy_extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.8.0
[pip3] torch==2.6.0.dev20241231+cu118
[pip3] torchvision==0.22.0.dev20241230+cu118
[conda] _anaconda_depends 2024.10 py312_mkl_0
[conda] blas 1.0 mkl
[conda] mkl 2023.2.0 h6a75c08_49573 conda-forge
[conda] mkl-service 2.4.1 py312h0ad82dd_1 conda-forge
[conda] mkl_fft 1.3.10 py312h98b3aff_1 conda-forge
[conda] mkl_random 1.2.8 py312hb562361_0 conda-forge
[conda] numpy 1.26.4 py312hfd52020_0
[conda] numpy-base 1.26.4 py312h4dde369_0
[conda] numpydoc 1.8.0 pyhd8ed1ab_1 conda-forge
[conda] torch 2.6.0.dev20241231+cu118 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu118 pypi_0 pypi
cc @csarofeen @ptrblck @xwang233 @eqy @msaroufim
| true
|
2,765,234,427
|
Fix dangling autogenerated sphinx source code links
|
Impaler343
|
open
|
[
"triaged",
"open source",
"topic: docs",
"module: python frontend"
] | 8
|
NONE
|
Fixes #143910
Broken source links can be fixed by adding return types to the functions.
It seems like almost all of the functions in ```_tensor.py``` have this problem, and I've tried to address a few of them.
A few of the return types are not constant in type or number, for which I have no solution.
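For illustration, a hypothetical before/after of the kind of change involved (the function names here are made up, not real `_tensor.py` methods):
```python
import torch
from torch import Tensor

# before: no return annotation, so the autogenerated docs cannot resolve the source link
def norm_no_annotation(x):
    return x.norm()

# after: an explicit return type lets sphinx link the doc entry back to the source
def norm_annotated(x: Tensor) -> Tensor:
    return x.norm()

print(norm_annotated(torch.ones(3)))
```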
cc @albanD
| true
|
2,765,225,848
|
[MPSInductor] Preserve dtype during load
|
malfet
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144122
* #144084
* #144083
* #144050
* #144105
* __->__ #144051
* #144055
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,765,225,825
|
[MPSInductor] Fix multi rangevar kernel invocation
|
malfet
|
closed
|
[
"Merged",
"topic: improvements",
"release notes: mps",
"ciflow/mps",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143966
* #144084
* #144083
* __->__ #144050
* #144156
* #144105
* #144122
* #144051
* #144055
By changing the `thread_position_in_grid` type to uint{n} and passing the dimensions during the kernel call.
`pytest test/inductor/test_torchinductor.py -k _mps` now scores 445 failed, 309 passed, 32 skipped
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,765,205,342
|
Add CUDA aarch64 triton wheel build
|
Skylion007
|
closed
|
[
"open source",
"Stale",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 2
|
COLLABORATOR
|
Create aarch64 triton wheel build
| true
|
2,765,185,622
|
Dynamo is not supported on Python 3.13+
|
Vectorrent
|
closed
|
[
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
I recently updated my system (Arch Linux), and with that came an upgrade to Python v3.13.1.
Since then, I have had trouble with code that used to work in older versions of Python. For example, the error below comes from `torch.compile` being used with FlexAttention in the [bytelatent](https://github.com/facebookresearch/blt) project (at run time).
I will probably use Conda or Docker to fix this, but it would be better to address these bugs upstream (which is why I'm reporting it). Let me know if you need any more info.
### Error logs
```
Traceback (most recent call last):
File "/home/crow/repos/praxis/run.py", line 992, in <module>
model = AutoModelForCausalLM.from_config(config)
File "/home/crow/repos/praxis/.venv/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 440, in from_config
return model_class._from_config(config, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/home/crow/repos/praxis/.venv/lib/python3.13/site-packages/transformers/modeling_utils.py", line 1509, in _from_config
model = cls(config, **kwargs)
File "/home/crow/repos/praxis/praxis/modeling_praxis.py", line 95, in __init__
super().__init__(config)
~~~~~~~~~~~~~~~~^^^^^^^^
File "/home/crow/repos/praxis/praxis/modeling_praxis.py", line 26, in __init__
from praxis.modules.encoder import PraxisByteLatentEncoder
File "/home/crow/repos/praxis/praxis/modules/encoder.py", line 25, in <module>
from bytelatent import base_transformer
File "/home/crow/repos/praxis/.venv/lib/python3.13/site-packages/bytelatent/base_transformer.py", line 19, in <module>
flex_attention_comp = torch.compile(flex_attention)
File "/home/crow/repos/praxis/.venv/lib/python3.13/site-packages/lightning/fabric/wrappers.py", line 406, in _capture
return compile_fn(*args, **kwargs)
File "/home/crow/repos/praxis/.venv/lib/python3.13/site-packages/torch/__init__.py", line 2416, in compile
raise RuntimeError("Dynamo is not supported on Python 3.13+")
RuntimeError: Dynamo is not supported on Python 3.13+
```
### Versions
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: Could not collect
Libc version: glibc-2.40
Python version: 3.13.1 (main, Dec 4 2024, 18:05:56) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.12.7-arch1-1-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce GTX 960
GPU 1: NVIDIA GeForce GTX 1070
Nvidia driver version: 565.77
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.5.1
/usr/lib/libcudnn_adv.so.9.5.1
/usr/lib/libcudnn_cnn.so.9.5.1
/usr/lib/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/libcudnn_graph.so.9.5.1
/usr/lib/libcudnn_heuristic.so.9.5.1
/usr/lib/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
CPU family: 6
Model: 60
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 3
CPU(s) scaling MHz: 98%
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 7998.77
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch_optimizer==3.3.2
[pip3] torch==2.5.1
[pip3] torchmetrics==1.6.1
[conda] Could not collect
```
cc @chauhang @penguinwu
| true
|
2,765,179,040
|
Propagate callable parameter types using ParamSpec (#142306)
|
yijun-lee
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 17
|
CONTRIBUTOR
|
Fixes #142306
This PR includes typing improvements and refactoring for the following files:
- __init__.py
- decorators.py
- _ops.py
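A minimal sketch of the ParamSpec pattern this applies (illustrative only, not the real decorators):
```python
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def passthrough(fn: Callable[P, R]) -> Callable[P, R]:
    # The wrapper advertises the exact parameter types of `fn` instead of (*args, **kwargs).
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return fn(*args, **kwargs)
    return wrapper

@passthrough
def scale(x: float, factor: float = 2.0) -> float:
    return x * factor

print(scale(3.0))  # type checkers now see (x: float, factor: float = 2.0) -> float
```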
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,764,980,567
|
switch Windows XPU build to vs2019.
|
xuhancn
|
closed
|
[
"module: windows",
"open source",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"module: xpu"
] | 5
|
COLLABORATOR
|
Fixes #ISSUE_NUMBER
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,764,944,198
|
With FSDP2, a small tensor on a 1-GPU world size has grad=0
|
ad8e
|
open
|
[
"oncall: distributed",
"triaged",
"module: fsdp"
] | 9
|
CONTRIBUTOR
|
### 🐛 Describe the bug
I train a model normally, and one of the parameters remains at 0 throughout the run. Its grad is always zero, but it should be a large value.
Ablations:
If I use world_size 8, I don't see this. The parameter moves and the grad is 30000 rather than 0.
If I change the parameter from shape (1,) to shape (1, 1), the grad remains 0.
If I change the parameter from shape (1,) to shape (2,) and use `torch.mean(my_param)`, the grad remains 0.
If I change the parameter from shape (1,) to shape (10000000,) and use `torch.mean(my_param)`, the grad is non-zero and the parameter trains normally.
If I change the parameter from shape (1,) to shape (), I am told `ValueError: fully_shard doesn't support salar parameters. Change log_multiplier to a 1D tensor with numel equal to 1.` (This is a known limitation.)
I use 3 other instances of this class. Those instances do not have this problem and their gradient works normally. All the instances which have problems are in FSDP2 submodules, and all the instances without problems are in the root FSDP2 module. I do not know if this is connected. These other instances have numel 256, 768, and 1.
When I check `my_parameter.requires_grad`, it's `True`.
I unfortunately am too busy to produce a repro in the next week. Some brief attempts at creating one from scratch did not produce this error. For example, the following code works fine and does not exhibit the error (it's not a repro):
```
# torchrun --standalone --nnodes=1 --nproc-per-node=1 fsdp2_1nodes.py
import torch
from torch.distributed._tensor import DTensor, Shard, Replicate, distribute_tensor, distribute_module, init_device_mesh
import os
import torch.distributed as dist
import datetime
import torch.nn as nn
import torch.nn.functional as F
from torch.distributed._composable.fsdp import fully_shard, MixedPrecisionPolicy
world_size = int(os.getenv("WORLD_SIZE", None))
local_rank = int(os.getenv("LOCAL_RANK", None))
global_rank = int(os.getenv("RANK", None))
print(f"world_size: {world_size}, local_rank: {local_rank}, global_rank: {global_rank}")
dist.init_process_group(
backend="cpu:gloo,cuda:nccl",
init_method=None,
world_size=world_size,
rank=global_rank,
device_id=torch.device(f"cuda:{local_rank}"),
timeout=datetime.timedelta(seconds=120),
)
torch.cuda.set_device(local_rank)
device_mesh = init_device_mesh("cuda", (1,))
model = torch.nn.Linear(8, 8, device=f"cuda:{local_rank}")
class ExpMultiply(nn.Module):
def __init__(self, shape=(1, 1), starting_value=0.0):
super().__init__()
# any init here would not survive FSDP1/FSDP2 sharding.
# shape must be 1D instead of 0D to make FSDP1 happy. "ValueError: FSDP doesn't support salar parameters. Change resnets.0.resnet_blocks.0.skip_projection_multiplier to a 1D tensor with numel equal to 1."
self.log_multiplier = torch.nn.Parameter(
torch.empty(shape, dtype=torch.float32, device=f"cuda:{local_rank}")
)
self.starting_value = starting_value
def init_weights(self, generator=None):
# torch.nn.init.constant_(self.log_multiplier, self.starting_value)
torch.nn.init.zeros_(self.log_multiplier)
def forward(self, x):
return torch.mul(x, torch.exp(self.log_multiplier + self.starting_value))
a = ExpMultiply((1,), 1.0)
fully_shard(model)
fully_shard(a)
nn.init.zeros_(model.weight)
a.init_weights()
print(model.weight)
a(model(torch.randn(8, 8, device=f"cuda:{local_rank}"))).sum().backward()
print(model.weight.grad)
print(a.log_multiplier.grad)
```
To exhibit the issue, `a.log_multiplier.grad` would have to be 0. It is not.
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] DISTS-pytorch==0.1
[pip3] gpytorch==1.13
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.0
[pip3] torchdiffeq==0.2.5
[pip3] torchsde==0.2.6
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
| true
|
2,764,774,912
|
[inductor] Refactor CachingAutotuner so that it can pickle
|
jansel
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144288
* __->__ #144044
These are refactors needed for #144288
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,764,736,403
|
Fixed doc where more than one device specified since only one device is used (#17553)
|
Stacie-Herda
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Fixes #17553
| true
|
2,764,717,067
|
[ScaledMM] Fix NaNs in test for garbage input data
|
drisspg
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #144042
| true
|
2,764,692,066
|
[Inductor] Generalize tiling algorithm to handle fused reductions
|
blaine-rister
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
# Issue
This PR cleans up an edge case that wasn't handled by https://github.com/pytorch/pytorch/pull/137243. The existing tiling code assumes that `node.get_ranges()` is a reliable source of pointwise and reduction numels. This is true for pointwise kernels, but the situation is more complicated with reductions. Since reductions change the number of elements in a tensor, not all ops within a reduction kernel will have the same number of iterations. For example, `var_mean` fuses pointwise division with the output of reduction sum, and the division lacks the corresponding reduction ranges.
# Fix
Instead of getting numels from `node.get_ranges()`, explicitly pass the global pointwise and reduction numels to the relevant tiling functions. In `SIMDKernel.complete_partial_tiling`, we solve for the missing numel by dividing the global numel by the partial tiling's numel. This ensures all tilings have the correct global numel.
Also, in `SIMDKernel.is_compatible`, add the global reduction numel to node ranges that are missing it. For example, `{"x": 8, "r0_": 8}` is compatible with a node of ranges `([8], [])` when we have `reduction_numel=8`.
Finally, this PR generalizes some of the existing codegen to handle multiple reduction dims. We already had code to ignore reduction splits for pointwise kernels, but it only worked for 1D reductions. Now it can handle ND.
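A toy sketch of the numel bookkeeping described above (plain Python; the key names are made up and this is not the actual `SIMDKernel` code):
```python
import math

def complete_partial_tiling(partial_tiling, global_numel):
    # Solve for the one missing tile size: global numel / product of the known tile sizes.
    known = math.prod(partial_tiling.values())
    assert global_numel % known == 0, "tiles must evenly divide the global numel"
    return {**partial_tiling, "missing": global_numel // known}

print(complete_partial_tiling({"x": 8}, 64))  # {'x': 8, 'missing': 8}
```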
# Test plan
This PR parametrizes the existing CI test for `var_mean` to also run with tiled reductions. It also adds a new test checking that `var_mean` generates 2D tilings (with tiled reduction enabled). These new tests would fail on the current main branch.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,764,685,152
|
Torch.sparse.mm failing gradient computation at half precision.
|
tanayarora09
|
open
|
[
"module: sparse",
"triaged",
"module: half"
] | 0
|
NONE
|
### 🐛 Describe the bug
When using torch.autocast, torch.sparse.mm(sparse_csr_tensor, dense_tensor) fails on the gradient computation with an unhelpful error. Half-precision matrix multiplication with CSR tensors was implemented in https://github.com/pytorch/pytorch/issues/41069.
Simple reproduction:
```
weight = torch.rand(5, 4, device = "cuda", dtype = torch.float32)
weight = (weight * (weight < 0.2)).to_sparse_csr().requires_grad_(True)
inp = torch.rand(4, 3, device = "cuda", dtype = torch.float32, requires_grad = False)
with torch.autocast("cuda", dtype = torch.float16, enabled = True):
loss = torch.sparse.mm(weight, inp).sum()
loss.backward()
```
```
RuntimeError: sampled_addmm: Expected mat1 and mat2 to have the same dtype, but got Half and Float
```
Furthermore, if one tries to convert to half precision manually like below, they receive:
```
weight = torch.rand(5, 4, device = "cuda", dtype = torch.float32)
weight = (weight * (weight < 0.2)).to_sparse_csr().requires_grad_(True).half()
inp = torch.rand(4, 3, device = "cuda", dtype = torch.float32, requires_grad = False).half()
loss = torch.sparse.mm(weight, inp).sum()
loss.backward()
```
```
RuntimeError: "sampled_addmm_out_sparse_csr" not implemented for 'Half'
```
### Versions
torch==2.6.0.dev20241231+cu124
and
torch==2.5.1 (with cu124)
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip
| true
|
2,764,663,831
|
[Mac/M1] torch.compile() -- expm1 returns an inaccurate result compared to the interpreted version
|
dcci
|
open
|
[
"oncall: pt2",
"oncall: cpu inductor"
] | 2
|
MEMBER
|
### 🐛 Describe the bug
Input:
```
davidino@davidino-mbp pytorch % cat /tmp/repro.py
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
x = torch.floor(x)
x = torch.angle(x)
x = torch.sin(x)
s = torch.positive(x)
return torch.expm1(x)
func = Model().to('cpu')
x = torch.tensor([ 0.7076383233070374, 0.2585877180099487, -0.1782233268022537,
-0.0771917924284935, -1.8218737840652466, -0.6442450284957886,
-1.0207887887954712, 0.7611123919487000, 0.9056779742240906,
1.7948490381240845])
pata = func(x.clone())
print(pata)
func1 = torch.compile(func, fullgraph=True)
tino = func1(x.clone())
print(tino)
print(torch.allclose(pata, tino, equal_nan=True))
print(torch.__version__)
```
Output:
```
davidino@davidino-mbp pytorch % python /tmp/repro.py
tensor([ 0.0000e+00, 0.0000e+00, -8.7423e-08, -8.7423e-08, -8.7423e-08,
-8.7423e-08, -8.7423e-08, 0.0000e+00, 0.0000e+00, 0.0000e+00])
tensor([ 0.0000e+00, 0.0000e+00, -5.9605e-08, -5.9605e-08, -5.9605e-08,
-5.9605e-08, -5.9605e-08, 0.0000e+00, 0.0000e+00, 0.0000e+00])
False
2.6.0a0+gitf3e5078
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gitf3e5078
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.27.8
Libc version: N/A
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 08:22:19) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.7.0
[pip3] optree==0.13.1
[pip3] torch==2.6.0a0+gitf3e5078
[conda] numpy 1.26.4 py312h7f4fdc5_0
[conda] numpy-base 1.26.4 py312he047099_0
[conda] numpydoc 1.7.0 py312hca03da5_0
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0a0+gitf3e5078 dev_0 <develop>
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen @chauhang @penguinwu
| true
|
2,764,614,180
|
[ROCm] Print amdgpu info on bare metal for CI runners
|
jithunnair-amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4
|
COLLABORATOR
|
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,764,601,195
|
cpp_extension.py expects an integer on CUDA_ARCH, failing with Grace Hopper.
|
surak
|
open
|
[
"module: cpp-extensions",
"module: cuda",
"triaged"
] | 2
|
NONE
|
### 🐛 Describe the bug
Grace Hopper reports its compute capability as 9.0a, not 9.0, and cpp_extension.py fails during arch autodetection because it expects the second part to be an integer.
The current workaround is to set `TORCH_CUDA_ARCH_LIST="9.0a"` while building it.
```
torch/utils/cpp_extension.py",
line 1972, in _get_cuda_arch_flags
supported_sm = [int(arch.split('_')[1])
^^^^^^^^^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: '90a'
```
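A hedged sketch of a more tolerant parse (a workaround idea, not the actual `cpp_extension.py` code): strip any architecture suffix such as the trailing `a` before converting to an int.
```python
import re

def arch_to_int(arch: str) -> int:
    # "sm_90a" -> 90, "sm_80" -> 80
    return int(re.match(r"\d+", arch.split("_")[1]).group())

print(arch_to_int("sm_90a"))  # 90
print(arch_to_int("sm_80"))   # 80
```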
### Versions
2.5.1
cc @malfet @zou3519 @xmfan @ptrblck @msaroufim @eqy
| true
|
2,764,575,796
|
[ONNX] Documentation describe the metadata stored in exported models
|
justinchuby
|
open
|
[
"module: onnx",
"triaged"
] | 2
|
COLLABORATOR
| null | true
|
2,764,557,563
|
CheckpointError with torch.compile + checkpointing + DDP
|
TidalPaladin
|
closed
|
[
"oncall: distributed",
"module: activation checkpointing",
"triaged",
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
In instances where torch.compile is combined with DDP and checkpointing, the following error is raised:
```
torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: A different number of tensors was saved during the original forward and recomputation.
```
I have only been able to reproduce the error when all three of these factors are present. Additionally, there seems to be a dependency on the model dimension (see example below). I originally reproduced this under PyTorch Lightning and distilled that code down to this minimal example. Tested on an RTX 3090 GPU.
```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
from torch import Tensor
from torch.utils.checkpoint import checkpoint
from torch.utils.data import DataLoader, TensorDataset
DIM = 256 # Success at 128, failure at 256
SEQ_LEN = 32
@torch.compile(fullgraph=True)
def mlp_forward(
x: Tensor,
w1: Tensor,
w2: Tensor,
b1: Tensor | None = None,
b2: Tensor | None = None,
) -> Tensor:
y = F.linear(x, w1, b1)
y = F.relu(y)
y = F.linear(y, w2, b2)
return y
class MLP(nn.Module):
def __init__(
self,
in_features: int,
hidden_features: int,
out_features: int,
):
super().__init__()
self.checkpoint = True
self.w_in = nn.Parameter(torch.randn(hidden_features, in_features))
self.w_out = nn.Parameter(torch.randn(out_features, hidden_features))
self.b_in = nn.Parameter(torch.randn(hidden_features))
self.b_out = nn.Parameter(torch.randn(out_features))
def forward(self, x: Tensor) -> Tensor:
if self.checkpoint:
result = checkpoint(
mlp_forward,
x,
self.w_in,
self.w_out,
self.b_in,
self.b_out,
use_reentrant=False,
)
else:
result = mlp_forward(x, self.w_in, self.w_out, self.b_in, self.b_out)
assert isinstance(result, Tensor)
return result
def main(ddp=True):
print(f"Running with DDP: {ddp}, DIM: {DIM}, SEQ_LEN: {SEQ_LEN}")
x = torch.randn(100, SEQ_LEN, DIM)
y = torch.zeros(100)
dataset = TensorDataset(x, y)
dataloader = DataLoader(dataset, batch_size=10)
model = MLP(DIM, 4 * DIM, DIM)
if ddp:
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12355"
dist.init_process_group(backend="nccl", world_size=1, rank=0)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)
device = torch.device("cuda:0")
model = model.to(device)
if ddp:
model = nn.parallel.DistributedDataParallel(model)
model.train()
try:
for batch in dataloader:
x, y = batch
x = x.to(device)
optimizer.zero_grad()
output = model(x)
loss = output.sum()
loss.backward()
optimizer.step()
finally:
if ddp:
dist.destroy_process_group()
print("Success")
if __name__ == "__main__":
main(ddp=True) # Fails
# Running first without DDP followed by DDP makes the DDP version work.
# Maybe triggering compiles outside DDP is key?
# main(ddp=False)
# main(ddp=True)
```
Fails with
```
Running with DDP: True, DIM: 256, SEQ_LEN: 32
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/tidal/Documents/mit-ub/test.py", line 101, in <module>
[rank0]: main(ddp=True) # Fails
[rank0]: ^^^^^^^^^^^^^^
[rank0]: File "/home/tidal/Documents/mit-ub/test.py", line 91, in main
[rank0]: loss.backward()
[rank0]: File "/home/tidal/.local/share/pdm/venvs/mit-ub-7pzcQwz--mit_ub/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward
[rank0]: torch.autograd.backward(
[rank0]: File "/home/tidal/.local/share/pdm/venvs/mit-ub-7pzcQwz--mit_ub/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
[rank0]: _engine_run_backward(
[rank0]: File "/home/tidal/.local/share/pdm/venvs/mit-ub-7pzcQwz--mit_ub/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tidal/.local/share/pdm/venvs/mit-ub-7pzcQwz--mit_ub/lib/python3.11/site-packages/torch/autograd/function.py", line 307, in apply
[rank0]: return user_fn(self, *args)
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tidal/.local/share/pdm/venvs/mit-ub-7pzcQwz--mit_ub/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1740, in backward
[rank0]: ctx_saved_tensors = ctx.saved_tensors
[rank0]: ^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tidal/.local/share/pdm/venvs/mit-ub-7pzcQwz--mit_ub/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 1129, in unpack_hook
[rank0]: frame.check_recomputed_tensors_match(gid)
[rank0]: File "/home/tidal/.local/share/pdm/venvs/mit-ub-7pzcQwz--mit_ub/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 865, in check_recomputed_tensors_match
[rank0]: raise CheckpointError(
[rank0]: torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: A different number of tensors was saved during the original forward and recomputation.
[rank0]: Number of tensors saved during forward: 8
[rank0]: Number of tensors saved during recomputation: 4
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.11.11 (main, Dec 4 2024, 08:55:08) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7352 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 71%
CPU max MHz: 2300.0000
CPU min MHz: 1500.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch_optimizer==3.3.2
[pip3] torch==2.5.1
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
```
Also reproduced on this machine:
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Artix Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.3-dirty
Libc version: glibc-2.40
Python version: 3.11.9 (main, May 14 2024, 22:54:14) [GCC 14.1.1 20240507] (64-bit runtime)
Python platform: Linux-6.10.8-artix1-1-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3960X 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 66%
CPU max MHz: 4568.1641
CPU min MHz: 2200.0000
BogoMIPS: 7604.05
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.5.0.post0
[pip3] pytorch_optimizer==3.3.2
[pip3] torch==2.5.1
[pip3] torchmetrics==1.6.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @soulitzer @chauhang @penguinwu
| true
|
2,764,440,663
|
[Intel XPU] enable kineto for XPU Windows.
|
xuhancn
|
closed
|
[
"module: windows",
"triaged",
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/xpu",
"module: xpu"
] | 7
|
COLLABORATOR
|
This PR turns on `kineto` for the Windows XPU wheel build.
For `kineto` on Windows XPU, the build-time dependencies are:
1. Intel PTI, which is included in oneAPI 2025+.
2. Level Zero SDK: https://github.com/oneapi-src/level-zero/releases/download/v1.14.0/level-zero-sdk_1.14.0.zip
**Note:**
The Level Zero SDK has to be set up manually at build time, so the kineto build stays off by default on Windows XPU to avoid build issues for developers.
After adding the Level Zero SDK include path to the `INCLUDE` environment variable, the env var `XPU_ENABLE_KINETO` can be set to turn it on, as sketched below.
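A hypothetical sketch of that opt-in from a Python driver script; the SDK path is a placeholder and not part of this PR, and both variables must be set before the wheel build starts:
```python
# Hypothetical sketch only: opt in to the kineto XPU build on Windows.
import os
import subprocess

# Placeholder path -- point this at wherever the Level Zero SDK was unzipped.
os.environ["INCLUDE"] = r"C:\level-zero-sdk\include;" + os.environ.get("INCLUDE", "")
os.environ["XPU_ENABLE_KINETO"] = "1"  # opt-in flag described in this PR

# Run the normal wheel build with the environment prepared above.
subprocess.run(["python", "setup.py", "bdist_wheel"], check=True)
```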
For the runtime dependency:
1. The intel-pti PyPI package. @chuanqi129 will follow up in a further PR.
Locally tested the nightly binary:
<img width="1909" alt="image" src="https://github.com/user-attachments/assets/7dfaa7bc-e8ed-40b8-bc71-f91a3df3b95f" />
TODO: @chuanqi129 will submit a follow-up PR to add `intel-pti` as a dependency and turn on the env var `XPU_ENABLE_KINETO` for the nightly build.
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,764,425,084
|
Training fails with Torch 2.1.0 on Nvidia Jetpack 5.1.2
|
mfatih7
|
open
|
[
"triaged",
"module: jetson"
] | 0
|
NONE
|
### 🐛 Describe the bug
Hello
We are trying to run a training on Nvidia Jetson devices with compute capabilities 7.2 and 8.7.
The system properties are as follows:
```
Python 3.8
Torch 2.1.0
Torchvision 0.16.2
CUDA 11.4
Nvidia Jetpack 5.1.2
Ubuntu 20.04
```
At the beginning of a simple MNIST training run, while executing `loss.backward()`, we get the error below:
```
File "/mnt/nvme/.venvs/venv3_8/lib/python3.8/site-packages/torch/autograd/__init__.py", line 204, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Event device type CUDA does not match blocking stream's device type CPU.
```
The error occurs when we use an environment in which Torch is added using the [.whl from Jetson Zoo](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048).
Even though we build our own torch .whl according to the [build file in Jetson containers](https://github.com/dusty-nv/jetson-containers/blob/master/packages/pytorch/build.sh), we get the same error.
When we use the same scripts to build torch for CUDA 12.2 and run the same simple MNIST training, we do not get the error.
I appreciate any help.
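For reference, a hypothetical minimal training step of the same shape (not the actual MNIST script) that reaches the failing `loss.backward()` call:
```python
# Hypothetical minimal repro shape -- not the reporter's exact script.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 1, 28, 28, device="cuda")
y = torch.randint(0, 10, (8,), device="cuda")

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # fails here on the reported JetPack 5.1.2 / CUDA 11.4 setup
opt.step()
```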
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git7bcf7da
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 7 2024, 13:10:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.120-tegra-aarch64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 3
Vendor ID: ARM
Model: 1
Model name: ARMv8 Processor rev 1 (v8l)
Stepping: r0p1
CPU max MHz: 2201.6001
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 3 MiB
L3 cache: 6 MiB
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] onnx==1.17.0
[pip3] torch==2.1.0a0+git7bcf7da
[pip3] torchaudio==2.1.0+6ea1133
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.16.2+c6f3977
[conda] Could not collect
cc @ptrblck @puririshi98
| true
|
2,764,390,427
|
if pytorch wheel package support avx512?
|
risemeup1
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
My CPU supports AVX512, and I want to use a PyTorch package that supports AVX512. Which one should I choose, or do I have to build from source?
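One way to check which SIMD level an installed wheel actually dispatches to at runtime (the availability of `get_cpu_capability` depends on a reasonably recent PyTorch release):
```python
# Quick check of the CPU capability the installed wheel dispatches to.
import torch

print(torch.backends.cpu.get_cpu_capability())  # e.g. "AVX512" or "AVX2"
print(torch.__config__.show())                  # full build/dispatch configuration
```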
### Versions
....
| true
|
2,764,386,614
|
Is the page 'PyTorch ONNX Exporter Code Reviews and Duty Rotation' of wiki still in use?
|
dune0310421
|
closed
|
[
"module: onnx",
"triaged"
] | 2
|
NONE
|
Hello everyone, I'm a PhD student interested in the governance mechanisms of PyTorch. I noticed that the page 'PyTorch ONNX Exporter Code Reviews and Duty Rotation' in the PyTorch wiki hasn't been modified for three years. Could you please let me know whether this page is still in use? Additionally, I'm wondering whether only the 'ONNX Exporter' module has a detailed code-review duty rotation mechanism.
| true
|
2,764,386,549
|
Enable mkldnn pattern matcher tests for BF16 on AArch64
|
Mousius
|
closed
|
[
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/linux-aarch64"
] | 11
|
CONTRIBUTOR
|
Fixes #143146
cc @malfet @snadampal @milpuz01 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,764,271,329
|
logaddexp fails on complex tensors in torch.compile
|
maybeLee
|
closed
|
[
"triaged",
"module: complex",
"oncall: pt2",
"module: inductor"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When using logaddexp on complex tensors, the API works fine in eager mode but fails under torch.compile with the following error message:
```
NameError: name 'nanj' is not defined. Did you mean: 'nan'?
```
Here is the code to reproduce:
```
import torch
input = torch.tensor([1.7641+1.j])
other = torch.tensor([0.4002+2.j])
eager_res = torch.logaddexp(input,other)
import numpy as np
np_res = np.log(np.exp(1.7641+1.j) + np.exp(0.4002+2.j))
print(f"eager_res: {eager_res}, np_res: {np_res}") # eager_res: tensor([1.9110+1.1868j]), np_res: (1.9110434644951153+1.1868173945582121j)
compiled_res = torch.compile(torch.logaddexp)(input,other) # NameError: name 'nanj' is not defined. Did you mean: 'nan'?
```
It is odd that `nanj` appears, since the input values look normal. Moreover, the `log` operation is not expected to receive a negative value here.
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gite15442a
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gite15442a
[pip3] triton==3.1.0
[conda] numpy 1.26.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gite15442a pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @desertfire @aakhundov @rgommers
| true
|
2,764,255,902
|
[ROCm] Add miopen_batch_norm to meta_registrations to fix AOTI issue
|
pytorchbot
|
closed
|
[
"module: rocm",
"open source",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
Currently the upstream example for AOTI usage breaks on ROCm (https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html)
```
File "/root/upstream/torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: unsupported operator: aten.miopen_batch_norm.default (see https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0 for how to fix)
from user code:
File "/root/vision/torchvision/models/resnet.py", line 285, in forward
return self._forward_impl(x)
File "/root/vision/torchvision/models/resnet.py", line 269, in _forward_impl
x = self.bn1(x)
```
This PR adds a meta registration for `miopen_batch_norm` to resolve this issue.
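For illustration only, a rough, hypothetical sketch of what a meta kernel for this op can look like, written against the public `torch.library` API rather than the internal `torch/_meta_registrations.py` used by the actual patch; argument names and output shapes are illustrative and may need adjustment per PyTorch version:
```python
# Hypothetical sketch -- the real fix registers the op in torch/_meta_registrations.py.
# A meta kernel only reports output shapes/dtypes so export and AOTI can trace the op.
import torch

@torch.library.impl("aten::miopen_batch_norm", "Meta")
def miopen_batch_norm_meta(input, weight, bias, running_mean, running_var,
                           training, exponential_average_factor, epsilon):
    out = torch.empty_like(input)
    save_mean = input.new_empty((input.shape[1],))  # illustrative shapes
    save_var = input.new_empty((input.shape[1],))
    return out, save_mean, save_var
```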
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,764,255,489
|
[ROCm] Guard triton backend call around cuda.is_available
|
pytorchbot
|
closed
|
[
"module: rocm",
"open source",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 1
|
COLLABORATOR
|
To resolve: https://github.com/pytorch/test-infra/issues/6082
Calling into Triton's get_backend_options will initialise CUDA and break CPU-only environments that may have hip installed.
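A hedged sketch of the guard pattern being described (illustrative, not the literal diff):
```python
# Illustrative guard only -- not the literal patch. The point is to avoid calling into
# Triton's backend (which initialises CUDA/HIP) unless a GPU is actually usable.
import torch

def safe_backend_options(get_backend_options):
    """Query Triton backend options only when it cannot crash a CPU-only environment."""
    if not torch.cuda.is_available():
        return {}
    return get_backend_options()
```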
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
| true
|
2,764,253,966
|
Respect ROCR_VISIBLE_DEVICES on AMD GPU device discovery
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
Reland of #140320 after failing test on trunk. Fixes potential environment clobbering in test, makes ROCr+HIP devices (if specified together) more robust to index errors.
Fixes #140318
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,764,139,679
|
torch.cuda.empty_cache() causes extra memory usage on 'cuda:0'
|
JimmyTauH
|
open
|
[
"module: cuda",
"triaged",
"module: CUDACachingAllocator"
] | 2
|
NONE
|
### 🐛 Describe the bug
# Issue Description:
When utilizing PyTorch with a specific CUDA device (in this case, 'cuda:8'), calling `torch.cuda.empty_cache()` unexpectedly results in additional memory allocation on 'cuda:0', approximately 255MB. This behavior is contrary to expectations, as the operation should ideally only affect the memory cache of the specified device ('cuda:8') and not impact other CUDA devices.
# Code
```python
import numpy as np
import torch
a = np.ones(100)
b = torch.tensor(a).to('cuda:8')
torch.cuda.empty_cache()
# Up to this point the behavior is normal: only GPU memory on cuda:8 is used.
del b
torch.cuda.empty_cache()  # Now about 255 MB of GPU memory on cuda:0 becomes occupied.
```
# Implications in Multi-GPU Clusters:
This characteristic/bug can pose significant challenges in multi-GPU clusters, especially in shared environments among multiple users. The unintended memory allocation on 'cuda:0' can lead to its memory being exhausted, thereby preventing all users from performing their tasks effectively.
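As a side note, a commonly used mitigation (a hypothetical sketch, not a fix for the underlying behaviour) is to restrict the process to the intended GPU before CUDA is initialised, so `empty_cache()` cannot create a context on physical GPU 0 at all:
```python
# Hypothetical workaround sketch: hide every GPU except the one this process should use.
# CUDA_VISIBLE_DEVICES must be set before torch initialises CUDA; before the import is safest.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "8"

import torch
b = torch.ones(100, device="cuda:0")  # physical GPU 8 is now "cuda:0" inside this process
del b
torch.cuda.empty_cache()              # no context is ever created on the real GPU 0
```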
# Environment Details:
- **NVIDIA-SMI Version**: 550.54.14
- **Driver Version**: 550.54.14
- **CUDA Version**: 12.4
- **Hardware**: NVIDIA GeForce RTX 3090
### Versions
PyTorch version: 2.4.1.post303
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Anaconda gcc) 11.2.0
Clang version: Could not collect
CMake version: version 3.30.0-rc4
Libc version: glibc-2.35
Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
GPU 8: NVIDIA GeForce RTX 3090
GPU 9: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.54.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 104
On-line CPU(s) list: 0-103
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 26
Socket(s): 2
Stepping: 7
CPU max MHz: 3800.0000
CPU min MHz: 1200.0000
BogoMIPS: 5000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.6 MiB (52 instances)
L1i cache: 1.6 MiB (52 instances)
L2 cache: 52 MiB (52 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-25,52-77
NUMA node1 CPU(s): 26-51,78-103
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] gpytorch==1.13
[pip3] numpy==1.26.4
[pip3] numpy-groupies==0.10.2
[pip3] numpyro==0.13.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] nvidia-nvjitlink-cu12==12.3.101
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-ignite==0.5.0.post2
[pip3] pytorch-lightning==2.2.0.post0
[pip3] torch==2.4.1.post303
[pip3] torch-geometric==2.6.1
[pip3] torchaudio==2.4.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.19.1
[pip3] triton==2.2.0
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.3.101 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] cudnn 9.3.0.75 cuda11.8 nvidia
[conda] gpytorch 1.13 pypi_0 pypi
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.4.107 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libmagma 2.8.0 hfdb99dd_0 conda-forge
[conda] libmagma_sparse 2.8.0 h9ddd185_0 conda-forge
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] libopenvino-pytorch-frontend 2024.4.0 h5888daf_2 conda-forge
[conda] libtorch 2.4.1 cuda118_h232d35b_303 conda-forge
[conda] mkl 2024.1.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310ha3dbc2a_1 conda-forge
[conda] mkl_random 1.2.5 py310hbd113e2_1 conda-forge
[conda] nccl 2.23.4.1 h03a54cd_2 conda-forge
[conda] numpy 1.25.2 pypi_0 pypi
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] numpy-groupies 0.10.2 pypi_0 pypi
[conda] numpyro 0.13.2 pyhd8ed1ab_0 conda-forge
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.19.3 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.3.101 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pyg 2.6.1 py310_torch_2.4.0_cu121 pyg
[conda] pytorch 2.4.1 cuda118_py310h8b36b8a_303 conda-forge
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-ignite 0.5.0.post2 pypi_0 pypi
[conda] pytorch-lightning 2.2.0.post0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.1 py310_cu121 pytorch
[conda] torchmetrics 1.3.1 pypi_0 pypi
[conda] torchtriton 2.2.0 py310 pytorch
[conda] torchvision 0.19.1 py310_cu121 pytorch
[conda] triton 2.2.0 pypi_0 pypi
cc @ptrblck @msaroufim @eqy
| true
|