| id (int64, 2.74B-3.05B) | title (string, 1-255 chars) | user (string, 2-26 chars) | state (2 classes) | labels (list, 0-24 items) | comments (int64, 0-206) | author_association (4 classes) | body (string, 7-62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
2,862,008,409
|
[dynamo] add generic graph break hints
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compile ux"
] | 3
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147912
* #147872
* #147494
* __->__ #147429
* #147385
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,861,981,242
|
Add a config to allow print and generate all recompile reasons and not stop at first.
|
laithsakka
|
open
|
[
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025",
"module: compile ux"
] | 3
|
CONTRIBUTOR
|
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
2,861,979,964
|
[draft_export] only clear pending unbacked symbols for overwritten kernels
|
pianpwk
|
closed
|
[
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
This was wrong; we were doing this in all cases.
| true
|
2,861,967,356
|
DISABLED test_complex_data_dependent_expr (__main__.TestDraftExport)
|
pytorch-bot[bot]
|
closed
|
[
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export"
] | 18
|
NONE
|
Platforms: asan, linux, rocm, slow, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_complex_data_dependent_expr&suite=TestDraftExport&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/37423150587).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 10 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_complex_data_dependent_expr`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/export/test_draft_export.py", line 287, in test_complex_data_dependent_expr
self.assertTrue(len(report.expressions_created) >= 4)
File "/opt/conda/envs/py_3.9/lib/python3.9/unittest/case.py", line 688, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_CROSSREF=1 python test/export/test_draft_export.py TestDraftExport.test_complex_data_dependent_expr
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `export/test_draft_export.py`
cc @clee2000 @wdvr @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,861,962,998
|
[dynamo][codegen] Implement CSE for pre-graph graph-arg bytecode reconstruction
|
anijain2305
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
This reduces fixed overhead seen in a few internal models.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,861,958,227
|
To enable NCCL communication to support uint64 tensors
|
wynneyin
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 5
|
NONE
|
In the field of cryptography and privacy computing, `uint64` support is crucial. During our work with PyTorch, we discovered that NCCL communication does not support `uint64`. Therefore, in this modification, we have enabled NCCL to support `uint64` tensor types.
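A minimal sketch of the workflow this change is meant to enable (assumptions: a single node with 2 GPUs launched via `torchrun --nproc_per_node=2`, and that `uint64` all-reduce is available after this modification):
```python
import torch
import torch.distributed as dist

def main():
    # NCCL backend; rank and world size come from the torchrun environment.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    # A uint64 tensor, e.g. holding masked values in privacy computing.
    t = torch.full((4,), rank + 1, dtype=torch.uint64, device="cuda")

    # Before this change, NCCL communication of uint64 tensors was unsupported.
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: {t}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```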
| true
|
2,861,923,950
|
[audio hash update] update the pinned audio hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 6
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
| true
|
2,861,923,740
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 71
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,861,914,648
|
type `fully_shard` so that the return value can be chained with typing enabled
|
xunnanxu
|
closed
|
[
"oncall: distributed",
"release notes: distributed (fsdp)",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147421
* #147420
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,861,914,583
|
capture the return value in the contract typing
|
xunnanxu
|
closed
|
[
"oncall: distributed",
"ciflow/trunk",
"release notes: distributed (fsdp2)"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147421
* __->__ #147420
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
Differential Revision: [D69820400](https://our.internmc.facebook.com/intern/diff/D69820400)
| true
|
2,861,897,282
|
no opt - vanila _copy only
|
sevenEng
|
closed
|
[
"fb-exported",
"release notes: cuda"
] | 3
|
NONE
|
Summary:
Remove the current optimization from prod to measure the baseline of `_copy`-ing each tensor serially, and therefore the lift provided by the current optimization. A baseline sketch is below.
Note that the current optimization is gated by a dimension-size check (<=4); when the dimension size is >=5, prod behaviour degrades to the baseline.
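A minimal sketch of the baseline being measured (assumptions: the shapes are illustrative and `torch._foreach_copy_` stands in for the batched/optimized path; the actual prod code and the D69811003 benchmark are not reproduced here):
```python
import torch

# 5-D tensors, i.e. past the <=4 dimension-size gate mentioned above.
srcs = [torch.randn(64, 64, 8, 8, 4, device="cuda") for _ in range(32)]
dsts = [torch.empty_like(t) for t in srcs]

# Baseline: copy each tensor in a serial fashion.
for d, s in zip(dsts, srcs):
    d.copy_(s)

# Illustrative batched path standing in for the gated optimization.
torch._foreach_copy_(dsts, srcs)
torch.cuda.synchronize()
```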
Test Plan:
### using benchmark script in D69811003 to get the results
P1735404385
{F1975247324}
Differential Revision: D69811002
| true
|
2,861,893,965
|
Add cmake hints to USE_SYSTEM_NVTX for nvtx3 include dir
|
xwang233
|
closed
|
[
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"release notes: build",
"topic: build"
] | 11
|
COLLABORATOR
|
Per title: sometimes it's hard for CMake to find NVTX3 without the CUDA include path hint.
| true
|
2,861,872,275
|
more dist ops in non strict
|
avikchaudhuri
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 7
|
CONTRIBUTOR
|
Summary: Previously we added support for `all_reduce` to non strict. This PR extends this support to other non-functional collectives that are remapped in Dynamo: `all_gather`, `all_gather_into_tensor`, `all_to_all_single`, `reduce_scatter_tensor`.
Test Plan: added unit tests
Differential Revision: D69813991
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,861,859,188
|
[Triton][Inductor] Infer Boolean Types
|
csteegz
|
open
|
[
"fb-exported",
"Stale",
"topic: not user facing",
"module: inductor"
] | 11
|
NONE
|
Summary: PT2 compiler has issues with boolean types in wrapped functions. There is some code to try to infer if an unknown type is an i32 or i64, but that causes a failure when it tries to compare with a boolean. Add explicit tests to determine if data is `i1`.
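A minimal sketch of the scenario being fixed (assumptions: the kernel and names are illustrative; this is the shape of a user-defined Triton kernel with a boolean runtime argument wrapped by `torch.compile`, where the i32/i64 inference previously tripped over `i1` data):
```python
import torch
import triton
import triton.language as tl

@triton.jit
def scale_if(x_ptr, out_ptr, flag, n, BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    # `flag` arrives as a boolean (i1) runtime argument, not i32/i64.
    y = tl.where(flag, x * 2.0, x)
    tl.store(out_ptr + offs, y, mask=mask)

@torch.compile
def f(x, flag: bool):
    out = torch.empty_like(x)
    n = x.numel()
    scale_if[(triton.cdiv(n, 128),)](x, out, flag, n, BLOCK=128)
    return out

print(f(torch.randn(1000, device="cuda"), True))
```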
Test Plan:
Added test to test_triton_kernels.py
I'm having trouble figuring out how to run locally but expect the unit test to work with existing infrastructure.
I have run locally and verified a wrapped triton kernel can be compiled.
Differential Revision: D69805822
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,861,816,256
|
Disable dict_tag optimization in ancestors if the ancestor is not common
|
isuruf
|
open
|
[
"open source",
"Stale",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147415
* #147414
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,861,816,132
|
Keep a handle to parent instead of root in GuardManagers
|
isuruf
|
open
|
[
"open source",
"Stale",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147415
* __->__ #147414
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,861,801,967
|
[util] fetch logical count cpu
|
yangw-dev
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5
|
CONTRIBUTOR
|
To match the vCPU count reported by AWS: after (96), before (48).
Instance Ref: https://instances.vantage.sh/aws/ec2/g4dn.metal
before: https://hud.pytorch.org/utilization/13377376406/37360984234/1
after: https://hud.pytorch.org/utilization/13401543806/37435031356/1
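A minimal sketch of the logical-vs-physical distinction behind the before/after numbers (assumption: `psutil` is what the utilization tooling relies on; on g4dn.metal this yields 96 logical CPUs vs 48 physical cores):
```python
import os
import psutil

print("logical CPUs:  ", os.cpu_count())                   # e.g. 96 on g4dn.metal
print("logical CPUs:  ", psutil.cpu_count(logical=True))   # same count via psutil
print("physical cores:", psutil.cpu_count(logical=False))  # e.g. 48
```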
| true
|
2,861,771,625
|
[ROCm][TunableOp] Fix TunableOp warmup environment variable.
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 3
|
COLLABORATOR
|
This PR corrects the behavior of the TunableOp warmup variables:
```
PYTORCH_TUNABLEOP_MAX_WARMUP_DURATION_MS
PYTORCH_TUNABLEOP_MAX_WARMUP_ITERATIONS
```
See the updated comments which describe how the environment variables are intended to work. Previously, if you only set one of the two environment variables the warmup iters would always be zero.
Manually tested the four possible combinations to make sure things still behave as intended.
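A minimal sketch of exercising the warmup knobs (assumptions: a ROCm build with TunableOp available, and that enabling it via `PYTORCH_TUNABLEOP_ENABLED` plus setting either warmup variable alone now behaves as documented):
```python
import os

# Set before the tuned GEMM runs; after this fix, setting only one of the two
# warmup variables no longer forces the warmup iterations to zero.
os.environ["PYTORCH_TUNABLEOP_ENABLED"] = "1"
os.environ["PYTORCH_TUNABLEOP_MAX_WARMUP_DURATION_MS"] = "30"
os.environ["PYTORCH_TUNABLEOP_MAX_WARMUP_ITERATIONS"] = "10"

import torch

a = torch.randn(2048, 2048, device="cuda", dtype=torch.float16)
b = torch.randn(2048, 2048, device="cuda", dtype=torch.float16)
c = a @ b  # first call triggers tuning (with warmup) for this GEMM shape
torch.cuda.synchronize()
```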
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang
| true
|
2,861,758,560
|
[caffe2] disable warning for unused arguments
|
rmaz
|
closed
|
[
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 4
|
CONTRIBUTOR
|
Summary: Disable warnings on unused command line arguments for ukernels_asm.
Test Plan:
On top of D69602077:
```
$ buck2 build --flagfile fbsource//xplat/mode/arstudio/auto.py fbsource//xplat/caffe2/aten/src/ATen/native/quantized/cpu/qnnpack:ukernels_asmAppleMac
```
Differential Revision: D69807977
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,861,730,268
|
Small scheduler refactor
|
exclamaforte
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
- ~Simplify speedup/slowdown error message~
- Make possible fusions into a default dict
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,861,725,454
|
[BE] remove sysconfig.get_config_var("LIBDIR") from cuda lib paths
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Summary: I think the path is not needed anymore. It was added in https://github.com/pytorch/pytorch/pull/126408, but it has been a while since then. See if CI complains.
Differential Revision: D69573185
See also https://github.com/pytorch/pytorch/pull/147158
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,861,718,042
|
Validate sparse tensors constructed via legacy constructor
|
mikaylagawarecki
|
closed
|
[
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
EDIT: this is not an all-encompassing fix because of legacy_load; will redo.
The provided exploit now errors during `torch.load` with:
RuntimeError: size is inconsistent with indices: for dim 0, size is 1 but found index 4702111234474983745
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147408
| true
|
2,861,712,263
|
[ONNX] Pick up missing types in dynamic shapes renaming
|
titaiwangms
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements",
"merging"
] | 12
|
COLLABORATOR
|
Found in `_check_dynamic_shapes` that int and None are valid input types for dynamic_shapes.
This PR adds support for these two types and adds tests to guard the sync between the ONNX flattening logic and the one in export.
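A minimal sketch of the two newly handled entry types (assumptions: the model and axis names are illustrative; the point is only that `dynamic_shapes` may mix `Dim` objects with plain ints and `None`):
```python
import torch
from torch.export import Dim

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

x = torch.randn(2, 4, 8)
# Axis 0 is dynamic, axis 1 is pinned via an int, axis 2 is left static via None.
dynamic_shapes = {"x": {0: Dim("batch"), 1: 4, 2: None}}

onnx_program = torch.onnx.export(M(), (x,), dynamic_shapes=dynamic_shapes, dynamo=True)
```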
| true
|
2,861,697,219
|
fix pt2e block wise quantization unit test
|
cccclai
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing"
] | 4
|
CONTRIBUTOR
|
Differential Revision: D69806596
https://github.com/pytorch/pytorch/pull/146946 breaks the unit test, because the quant nodes are folded by default now.
| true
|
2,861,691,504
|
dynamo should recompile with constant tensors that use ambient device guards
|
jamesjwu
|
closed
|
[
"triaged",
"actionable",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher",
"dynamo-must-fix"
] | 3
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Here's a simple unit test repro:
```python
@torch.compile
def f():
y = torch.tensor([0, 1024, 2048, 3072, 4096, 5120, 6144, 7168, 8192], dtype = torch.int32, device = "cuda")
return (y,)
index = 0
with torch.cuda._DeviceGuard(device):
torch.cuda.set_device(device)
result = f()
assert(result[0].device == torch.device("cuda:0"))
index = 1
with torch.cuda._DeviceGuard(index):
torch.cuda.set_device(index)
result = f()
assert(result[0].device == torch.device("cuda:1")) # Fails
```
When creating a constant tensor with `torch.tensor`, Dynamo should guard on the specific device index of the tensor being created, because the output of `f()` should always return a tensor of the current cuda device in eager.
However, AOTAutograd embeds constants into the graph, so guards need to be added so that dynamo correctly recompiles when the device guard changes.
This also affects AOTAutogradCache. If you run the same example, but with a `torch._dynamo.reset()` in between, while enabling FXGraphCache and AOTAutogradCache, you'll get a cache hit and a similar issue.
There are a bunch of possible fixes here: AOTAutograd should probably add a guard on the ambient device index when converting a tensor into a constant, and it should also be part of the cache key. Theoretically, when creating the constant tensor, AOTAutograd must use *something* on the dynamo graph to tell it how to create the tensor. Will dig in more.
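A minimal sketch of the eager semantics the missing guard should preserve (assumption: a machine with at least two CUDA devices); `device="cuda"` resolves against the ambient device, so the same call yields tensors on different devices under different guards:
```python
import torch

def g():
    # Constant tensor on the *current* CUDA device, mirroring torch.tensor(...) in f().
    return torch.tensor([0, 1024, 2048], dtype=torch.int32, device="cuda")

with torch.cuda.device(0):
    assert g().device == torch.device("cuda:0")

with torch.cuda.device(1):
    assert g().device == torch.device("cuda:1")  # eager follows the ambient device guard
```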
### Versions
latest torch nightly
cc @chauhang @penguinwu @zou3519 @bdhirsh
| true
|
2,861,665,942
|
[NOT READY][dynamo] CSE for grapharg sources
|
anijain2305
|
closed
|
[
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,861,626,916
|
cpp_wrapper: reduce memory usage by removing unneeded temporaries
|
benjaminglass1
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147225
* #146706
* __->__ #147403
This PR contains a set of interrelated changes, listed below, with the upshot that compiled model memory usage in `cpp_wrapper` mode is now roughly equivalent to the default inductor mode.
Changes:
1. Refactor `reinterpret_view` calls in `cpp_wrapper` to always return a temporary RAII tensor object, rather than saving off a "temporary" tensor handle that persisted through the end of the function. This matches the behavior of the base Python wrapper class, and is responsible for majority of the memory usage reductions.
2. Eliminate nearly all other cases where a "temporary" tensor handle was saved off (with the exception of one or two places where the tensor would immediately be destroyed by going out-of-scope). This necessitated some ugly-looking code to handle `Optional[Tensor]` and `Optional[Sequence[Any]]`, since `Optional` is passed by pointer into the C-shim functions (making passing temporary objects difficult). This code is justified by the fact that it only appears in controlled circumstances that we auto-generate, so there are minimal user-facing footguns.
3. Delete the list containing the input tensors to the `cpp_wrapper` main function after casting them to `AtenTensorHandle` objects, which have an internal reference count keeping them alive.
The [TorchInductor benchmark](https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Sat%2C%2015%20Feb%202025%2018%3A38%3A08%20GMT&stopTime=Sat%2C%2022%20Feb%202025%2018%3A38%3A08%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=gh/benjaminglass1/73/head&lCommit=4d5edaf67e80ca9ca36d301af1ded13967a04790&rBranch=main&rCommit=e1bf892d9004a4dba0748d0eda5c3b4eced0ea70) I ran shows the increased memory compression.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
Differential Revision: [D70648897](https://our.internmc.facebook.com/intern/diff/D70648897)
| true
|
2,861,592,442
|
[dynamic shapes][export] real-tensor tracing fails, due to bad decomposition path
|
pianpwk
|
closed
|
[
"oncall: pt2",
"module: dynamic shapes",
"export-triaged",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Minimal repro:
```
def test_index(self):
class M(torch.nn.Module):
def forward(self, x, mask, weight, bias):
masked = x[mask != 0, :, :]
return torch.nn.functional.linear(masked, weight, bias)
x = torch.zeros(10)
inp = (torch.randn(10, 8, 7), x, torch.randn(25, 7), torch.randn(25))
ep, report = draft_export(M(), inp)
Error:
File "/data/users/pianpwk/pytorch/torch/_prims/__init__.py", line 1532, in _split_dim_meta
inner_length = a.shape[dim] // outer_length
ZeroDivisionError: integer division or modulo by zero
```
How we get there is a bit complicated, but basically the steps are:
1. `masked` has a shape of [u0, 8, 7]
2. the linear call, after the matmul, produces a call to `_reshape_view_helper`, which tries to reshape a tensor of size [8*u0, 25] into [u0, 8, 25]. Regardless of how we get there, what's important is that the `_reshape_view_helper()` decomposition has some logic for how the reshape is implemented, and every op traced in the decomposition goes to FakeTensor dispatch.
3. Because we're running draft export, which uses real-tensor tracing, everything that goes to FakeTensor dispatch also has a corresponding real-tensor call, to store the real values.
Now here things are a bit problematic, because in `_reshape_view_helper` we have an input with fake shape [8*u0, 25] but real shape [0, 25]. If we follow the decomposition logic using the fake shape, we end up here: https://github.com/pytorch/pytorch/blob/74682e859533d3751087f8cd1a3abe61a2ba40c4/torch/_refs/__init__.py#L3811, which is where the division by zero error happens. This makes sense, because with real-tensor tracing we're trying to split with a `length` of 0, and that's invalid.
However if we had followed the real shape in the decomposition logic, we would have known the tensor has 0 elements and we could have broken out early here (though that would have been incorrect for the FakeTensor case): https://github.com/pytorch/pytorch/blob/74682e859533d3751087f8cd1a3abe61a2ba40c4/torch/_refs/__init__.py#L3712-L3714
So I'm not sure what the correct solution is here. Some sketches I can think of are:
1) rewrite the metas to accommodate real-tensor tracing. This might mean checking for real-tensor values, or not using split_dim, but I'm not sure what the implications of this are.
2) have real-tensor & fake-tensor tracing follow independent decomposition paths, but I feel this is a non-solution, mainly because a lot of data-dependent errors originate from tracing decompositions, and needing the real values to decide how to decompose during fake tensor tracing.
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git5d675de
Is debug build: False
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.34
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-0_fbk12_hardened_11583_g0bef9520ca2b-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
Nvidia driver version: 525.105.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 92
On-line CPU(s) list: 0-91
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 92
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 5.8 MiB (92 instances)
L1i cache: 5.8 MiB (92 instances)
L2 cache: 46 MiB (92 instances)
L3 cache: 1.4 GiB (92 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-91
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] adam-atan2-pytorch==0.1.1
[pip3] alphafold3-pytorch==0.6.6
[pip3] bert_pytorch==0.0.1a4
[pip3] ema-pytorch==0.7.3
[pip3] executorch==0.4.0.dev20240807+cpu
[pip3] flake8==7.1.1
[pip3] frame-averaging-pytorch==0.1.2
[pip3] lion-pytorch==0.2.2
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.0
[pip3] onnxscript==0.1.0.dev20250122
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.13.1
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-lightning==2.0.7
[pip3] pytorch_sphinx_theme==0.0.24
[pip3] pytorch-triton==3.0.0+45fff310c8
[pip3] rotary-embedding-torch==0.8.5
[pip3] torch==2.7.0a0+git5d675de
[pip3] torch_geometric==2.4.0
[pip3] torch-mlir==20241017.255
[pip3] torch-stoi==0.2.1
[pip3] torch_tensorrt==2.6.0.dev20241007+cu124
[pip3] torchao==0.5.0
[pip3] torchaudio==2.6.0a0+36815ef
[pip3] torchdiffeq==0.2.4
[pip3] torchmetrics==1.0.3
[pip3] torchrec==0.9.0a0+5e30669
[pip3] torchsde==0.2.6
[pip3] torchsr==1.0.4
[pip3] torchtext==0.18.0
[pip3] torchtune==0.0.0
[pip3] torchtyping==0.1.5
[pip3] torchvision==0.16.2
[pip3] torchx==0.7.0
[pip3] triton==3.1.0
[conda] adam-atan2-pytorch 0.1.1 pypi_0 pypi
[conda] alphafold3-pytorch 0.6.6 pypi_0 pypi
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] blas 1.0 mkl
[conda] ema-pytorch 0.7.3 pypi_0 pypi
[conda] executorch 0.4.0.dev20240809+cpu pypi_0 pypi
[conda] frame-averaging-pytorch 0.1.2 pypi_0 pypi
[conda] lion-pytorch 0.2.2 pypi_0 pypi
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] open-clip-torch 2.24.0 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] pytorch-lightning 2.0.7 pypi_0 pypi
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] pytorch-triton 3.0.0+45fff310c8 pypi_0 pypi
[conda] pytorch3d 0.7.7 dev_0 <develop>
[conda] rotary-embedding-torch 0.8.5 pypi_0 pypi
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-mlir 20241017.255 pypi_0 pypi
[conda] torch-stoi 0.2.1 pypi_0 pypi
[conda] torch-tensorrt 2.6.0.dev20241007+cu124 pypi_0 pypi
[conda] torchao 0.4.0+gitaccbdba pypi_0 pypi
[conda] torchaudio 2.6.0a0+36815ef dev_0 <develop>
[conda] torchbench 0.1 dev_0 <develop>
[conda] torchdiffeq 0.2.4 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchrec 0.9.0a0+5e30669 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchtext 0.18.0 pypi_0 pypi
[conda] torchtune 0.0.0 pypi_0 pypi
[conda] torchtyping 0.1.5 pypi_0 pypi
[conda] torchvision 0.16.2 pypi_0 pypi
[conda] torchx 0.7.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @ezyang @bobrenjc93 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,861,589,994
|
[ONNX] Create scaffolding for torchlib ops
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"release notes: onnx",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147401
* #147396
* #147392
* #147391
This PR creates the scaffolding for new onnx decomp functions described in https://github.com/pytorch/pytorch/issues/139301. It adds two ops: abs and add, and enables the related tests.
| true
|
2,861,587,997
|
torch.export doesn't provide useful error message when someone uses unrecognized dataclass as input
|
tugsbayasgalan
|
closed
|
[] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
from dataclasses import dataclass
import torch
@dataclass
class MyStaticInput:
int_1: int
int_2: int
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, x1):
return x + x1.int_1 + x1.int_2
torch.export.export(Foo(), (torch.randn(1), MyStaticInput(1, 2)), strict=False)
```
Gives error:
```
File "/data/users/tmanlaibaatar/pytorch/torch/export/graph_signature.py", line 561, in _convert_to_export_graph_signature
_make_argument_spec(node, input_tokens)
File "/data/users/tmanlaibaatar/pytorch/torch/export/graph_signature.py", line 532, in _make_argument_spec
raise AssertionError(
AssertionError: Encountered an unsupported object of type <class '__main__.MyStaticInput'> while writing the metadata for exported program
```
I think it should have errored earlier and suggested marking the input as a constant or registering it as a pytree.
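A minimal sketch of one path such an error message could point users to (assumption: treating the dataclass as a pytree container via `torch.export.register_dataclass` is acceptable here, rather than marking it constant):
```python
from dataclasses import dataclass
import torch

@dataclass
class MyStaticInput:
    int_1: int
    int_2: int

# Registering the dataclass as a pytree node lets export flatten it into leaves.
torch.export.register_dataclass(MyStaticInput)

class Foo(torch.nn.Module):
    def forward(self, x, x1):
        return x + x1.int_1 + x1.int_2

ep = torch.export.export(Foo(), (torch.randn(1), MyStaticInput(1, 2)), strict=False)
print(ep)
```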
### Versions
main
| true
|
2,861,585,608
|
[torchbind] Differentiate ScriptModule and ScriptObject with qualified name
|
ydwu4
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 8
|
CONTRIBUTOR
|
Summary:
This PR adds a `_is_script_object` method to differentiate ScriptModule and ScriptObject; the former inherits from ScriptObject in C++, so both pass the `isinstance(obj, torch.ScriptObject)` check.
The qualified name of a ScriptObject (i.e. a custom class) starts with "__torch__.torch.classes"; this has been a widely used assumption for dealing with custom classes across our code base.
Test Plan: Add new test.
Differential Revision: D69685316
| true
|
2,861,581,439
|
Add overflow check for large storage_offsets
|
wdvr
|
open
|
[
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes #145259
This adds two overflow checks to the storage offset calculation @ `aten/src/ATen/native/Resize.h`, avoiding this to crash:
```
python3 -c "import torch; print(torch.as_strided(torch.arange(10), size=(5,), stride=(2,), storage_offset=8170450533120000000))"
```
and avoiding this to return a wrong Tensor:
```
python3 -c "import torch; print(torch.as_strided(torch.arange(10), size=(5,), stride=(2,), storage_offset=2**63-10000))"
```
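A minimal sketch of the behavior this aims for (assumption: the new overflow checks surface as a `RuntimeError` from `as_strided`, rather than a crash or a silently wrong tensor):
```python
import torch

base = torch.arange(10)
for offset in (8170450533120000000, 2**63 - 10000):
    try:
        t = torch.as_strided(base, size=(5,), stride=(2,), storage_offset=offset)
        print("unexpectedly succeeded:", t)
    except RuntimeError as e:
        print("rejected oversized storage_offset:", e)
```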
| true
|
2,861,561,396
|
torch.export needs good API for marking if certain input is constant or not.
|
tugsbayasgalan
|
closed
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
from dataclasses import dataclass
import torch
@dataclass
class MyStaticInput:
int_1: int
int_2: int
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, static):
return x + static.int_1 + static.int_2
from torch.utils._pytree import register_constant
register_constant(MyStaticInput)
torch.export.export(Foo(), (torch.randn(1), MyStaticInput(1, 2)), strict=False)
```
This fails with:
```
ValueError: treespec.unflatten(leaves): `leaves` has length 2 but the spec refers to a pytree that holds 1 items (TreeSpec(tuple, None, [TreeSpec(tuple, None, [*,
TreeSpec(MyStaticInput, ConstantNode(value=MyStaticInput(int_1=1, int_2=2)), [])]),
TreeSpec(dict, [], [])])).
```
This is because register_constant actually makes MyStaticInput into an empty container. I think we need some API to say that this thing is registered as a constant, plus some option to toggle whether the constant is a leaf or not.
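A minimal sketch of the flattening behavior described above (assumption: same names as the repro); the registered constant contributes no leaves, so the value lives in the spec, which is why unflattening later complains about the leaf count:
```python
from dataclasses import dataclass
import torch.utils._pytree as pytree
from torch.utils._pytree import register_constant

@dataclass
class MyStaticInput:
    int_1: int
    int_2: int

register_constant(MyStaticInput)

leaves, spec = pytree.tree_flatten(MyStaticInput(1, 2))
print(leaves)  # [] -- treated as an empty container
print(spec)    # the spec carries the constant value itself
```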
### Versions
Main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
2,861,550,035
|
[ONNX] Refactor dispatcher and registry
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147469
* #147392
* __->__ #147396
This PR sets up the registry to accept onnx decomp functions to be moved into PyTorch (https://github.com/pytorch/pytorch/issues/139301).
The ops from onnx script are currently appended to the registry. When the ops are moved into PyTorch, the moved ops take precedence because they appear first in the registry list.
After the migration, the hooks for loading ops from onnx script will be removed.
1. Use a private field `_pt_onnx_signature` to store function signatures to avoid conflicts
2. Update the registry to record the signature in OnnxDecompMeta and update the dispatcher to leverage the data structure
3. Update the registry to prepare for onnx op registration, and update the onnx_impl decorator to support a no_compile option
Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
| true
|
2,861,527,397
|
[Inductor][Triton] Rework casting logic to avoid illegal bitcast
|
alexbaden
|
closed
|
[
"triaged",
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 19
|
COLLABORATOR
|
Triton introduced checks for bitcasts where the casted value does not fit into the casted type (e.g. https://github.com/triton-lang/triton/pull/5926, though in this instance I think the issue is related to the type for the broadcast). Some routines in Inductor now perform illegal bitcasts. I reworked the compare and swap w/ index routine used in sort to remove the illegal bitcast (~~I left the bitcast for now, but I think it could probably be removed assuming the reshape does not change the type~~). The explicit cast is correct, and I don't think there are performance issues, but because the cast on the sum is not a bitcast I suppose there could be.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,861,522,988
|
[ROCm] gfx940 and gfx941 cleanup
|
naromero77amd
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 4
|
COLLABORATOR
|
Removing gfx architectures not supported by ROCm.
NOTE: For users wanting to build PyTorch for gfx archs that are *not* supported by the official wheels on download.pytorch.org, you can build PyTorch from source for your desired gfx arch [using the PYTORCH_ROCM_ARCH env var](https://github.com/pytorch/pytorch/blob/main/README.md#amd-rocm-support).
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,861,515,165
|
[BE] correct docs for clock_rate to MHz, fixes #147098
|
janeyx99
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda",
"topic: docs"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147393
| true
|
2,861,483,634
|
[ONNX] Add scaffolding for onnx decomp and logic for op tests
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147472
* #147469
* __->__ #147392
Create scaffold for onnx op test data and common logic. This PR creates the scaffolding for new onnx decomp functions described in https://github.com/pytorch/pytorch/issues/139301. It adds two ops: abs and add, and enables the related tests.
https://github.com/pytorch/pytorch/issues/139301
| true
|
2,861,483,544
|
[ONNX] Move and improve error reproduction logic in test
|
justinchuby
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: not user facing"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147401
* #147396
* #147392
* __->__ #147391
https://github.com/pytorch/pytorch/issues/139301
| true
|
2,861,372,707
|
[CD] Increase timeout for windows binary builds
|
atalman
|
closed
|
[
"Merged",
"ciflow/binaries",
"topic: not user facing",
"ciflow/nightly"
] | 6
|
CONTRIBUTOR
|
Mitigates https://github.com/pytorch/pytorch/issues/147376
| true
|
2,861,301,084
|
Fix representation of `Lazy*` modules after loading parameters
|
adosar
|
open
|
[
"module: nn",
"triaged"
] | 0
|
NONE
|
### 🐛 Describe the bug
When a `Lazy*` module initializes its parameters by loading the state dict of another module:
```python
import torch
from torch.nn import LazyLinear
x = torch.randn(4, 4)
l1 = LazyLinear(16)
print(l1)
l1(x)
print(l1)
l2 = LazyLinear(16)
l2.load_state_dict(l1.state_dict())
print(l2)
```
the representation of the module isn't updated:
```
LazyLinear(in_features=0, out_features=16, bias=True)
Linear(in_features=4, out_features=16, bias=True)
LazyLinear(in_features=0, out_features=16, bias=True)
```
even if a forward pass is performed (this only changes `LazyLinear` to `Linear` in the representation):
```python
import torch
from torch.nn import LazyLinear
x = torch.randn(4, 4)
l1 = LazyLinear(16)
print(l1)
l1(x)
print(l1)
l2 = LazyLinear(16)
l2.load_state_dict(l1.state_dict())
print(l2)
l2(x)
print(l2)
```
```
LazyLinear(in_features=0, out_features=16, bias=True)
Linear(in_features=4, out_features=16, bias=True)
LazyLinear(in_features=0, out_features=16, bias=True)
Linear(in_features=0, out_features=16, bias=True)
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.31.5
Libc version: glibc-2.36
Python version: 3.11.2 (main, Nov 30 2024, 21:22:50) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-23-amd64-x86_64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6448H
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 4
Stepping: 8
CPU(s) scaling MHz: 22%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 6 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 256 MiB (128 instances)
L3 cache: 240 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
NUMA node2 CPU(s): 64-95
NUMA node3 CPU(s): 96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.5.1
[pip3] torchmetrics==1.6.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
| true
|
2,861,176,740
|
Revert "Introduce new template heuristic for triton autotune configs"
|
jansel
|
closed
|
[
"module: rocm",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 13
|
CONTRIBUTOR
|
Summary:
This diff reverts D69573225 / https://github.com/pytorch/pytorch/pull/144985
15% cold compile time regression, see https://fb.workplace.com/groups/1075192433118967/permalink/1608559059782299/
Test Plan: NA
Differential Revision: D69790102
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,861,174,237
|
Track test count regressions
|
clee2000
|
open
|
[
"triaged",
"module: devx"
] | 0
|
CONTRIBUTOR
|
As in the title: to catch bugs in sharding, or cases where something causes many tests to be deleted or added (such as a change to the bash scripts that unintentionally stopped testing on Mac).
cc @ZainRizvi @kit1980 @huydhn
| true
|
2,860,980,414
|
Fix linter warnings
|
ahmadsharif1
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 3
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/pull/145866 accidentally introduced a warning about const casts and also comparison of unsigned long int with signed long int.
This PR fixes both of those warnings.
Tested by running:
```
/usr/local/cuda/bin/nvcc -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_BUILD_MAIN_LIB -DTORCH_CUDA_USE_NVTX3 -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_CUDA -DUSE_CUFILE -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_FLASH_ATTENTION -DUSE_MEM_EFF_ATTENTION -DUSE_NCCL -DUSE_RPC -DUSE_TENSORPIPE -D_FILE_OFFSET_BITS=64 -Dtorch_cuda_EXPORTS -I/home/ahmads/personal/pytorch/build/aten/src -I/home/ahmads/personal/pytorch/aten/src -I/home/ahmads/personal/pytorch/build -I/home/ahmads/personal/pytorch -I/home/ahmads/personal/pytorch/cmake/../third_party/benchmark/include -I/home/ahmads/personal/pytorch/third_party/onnx -I/home/ahmads/personal/pytorch/build/third_party/onnx -I/home/ahmads/personal/pytorch/nlohmann -I/home/ahmads/personal/pytorch/aten/src/THC -I/home/ahmads/personal/pytorch/aten/src/ATen/cuda -I/home/ahmads/personal/pytorch/third_party/fmt/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/include -I/home/ahmads/personal/pytorch/aten/src/ATen/../../../third_party/cutlass/tools/util/include -I/home/ahmads/personal/pytorch/build/caffe2/aten/src -I/home/ahmads/personal/pytorch/aten/src/ATen/.. -I/home/ahmads/personal/pytorch/build/nccl/include -I/home/ahmads/personal/pytorch/c10/cuda/../.. -I/home/ahmads/personal/pytorch/c10/.. -I/home/ahmads/personal/pytorch/third_party/tensorpipe -I/home/ahmads/personal/pytorch/build/third_party/tensorpipe -I/home/ahmads/personal/pytorch/third_party/tensorpipe/third_party/libnop/include -I/home/ahmads/personal/pytorch/torch/csrc/api -I/home/ahmads/personal/pytorch/torch/csrc/api/include -isystem /home/ahmads/personal/pytorch/build/third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/gloo -isystem /home/ahmads/personal/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/ahmads/personal/pytorch/third_party/protobuf/src -isystem /home/ahmads/personal/pytorch/third_party/XNNPACK/include -isystem /home/ahmads/personal/pytorch/third_party/ittapi/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/eigen -isystem /usr/local/cuda/include -isystem /home/ahmads/personal/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /home/ahmads/personal/pytorch/third_party/ideep/include -isystem /home/ahmads/personal/pytorch/INTERFACE -isystem /home/ahmads/personal/pytorch/third_party/nlohmann/include -isystem /home/ahmads/personal/pytorch/third_party/NVTX/c/include -isystem /home/ahmads/personal/pytorch/cmake/../third_party/cudnn_frontend/include -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS -D_GLIBCXX_USE_CXX11_ABI=1 -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch -gencode arch=compute_90,code=sm_90 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail 
-DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -O3 -DNDEBUG -std=c++17 -Xcompiler=-fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Xcompiler -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-but-set-variable -Wno-maybe-uninitialized -MD -MT caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/SoftMax.cu.o -MF caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/SoftMax.cu.o.d -x cu -c /home/ahmads/personal/pytorch/aten/src/ATen/native/cuda/SoftMax.cu -o caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/SoftMax.cu.o
```
And I got no warnings or errors. Same with `python setup.py develop`
| true
|
2,860,957,940
|
[dynamo] make some more graph break messages readable in English [2/N]
|
williamwen42
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"module: compile ux"
] | 5
|
MEMBER
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147912
* #147872
* #147494
* #147429
* __->__ #147385
This is for the goal of "for some large number Z, make sure the error messages are readable English" - beginning to audit all `unimplemented` sites and making sure that all messages are at least English-readable. Hints may not necessarily be provided.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,860,946,681
|
[BE] Fix tensor stub
|
vmoens
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147384
| true
|
2,860,907,285
|
[RFC] dropping CUDA 11.8 support in CI/CD
|
atalman
|
open
|
[
"module: build",
"module: cuda",
"triaged"
] | 1
|
CONTRIBUTOR
|
Related to: https://github.com/pytorch/pytorch/issues/145544
Opening this RFC to discuss the possibility and timeline of dropping CUDA 11.8.
For PyTorch Release 2.7 we are proceeding with the following configuration:
- CUDA 11.8, CUDNN 9.1.0.70 - same as previous release 2.6; no changes to CUDA 11.8 - legacy version
- CUDA 12.6, CUDNN 9.x - version released to PyPI - stable version
- CUDA 12.8, CUDNN 9.x - new experimental version
The proposal is to announce the removal of CUDA 11.8 at release 2.7 and drop it for release 2.8, hence dropping support for 11.8 in nightlies from Mar 2025 to Jun 2025.
cc @malfet @seemethere @ptrblck @msaroufim @eqy @tinglvv @nWEIdia
### Versions
2.7-2.8
| true
|
2,860,844,623
|
[ROCm][Windows] Enable torchvision build with ROCm on Windows
|
tvukovic-amd
|
closed
|
[
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/rocm"
] | 9
|
CONTRIBUTOR
|
- Updated HIP flags for Windows (removed non Windows flags on Windows case, added runtime library)
- Set hipcc call for Windows case
- Removed CUDA flags (not used in ROCm) on Windows
- Updated Windows compiler (added case when using ROCm on Windows)
- Fixed path issue in hipify_python
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,860,591,659
|
Miss comm.reduce_add_coalesced in communication collectives of cuda
|
FFFrog
|
closed
|
[
"oncall: distributed",
"module: docs"
] | 0
|
COLLABORATOR
|
### 📚 The doc issue

https://github.com/pytorch/pytorch/blob/0c8028e877258fd5ef34da4c8d09121cdfc0c9a6/torch/cuda/comm.py#L12-L18
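A minimal usage sketch of the collective missing from the docs list (assumption: at least two CUDA devices are present):
```python
import torch
from torch.cuda import comm

# One list of tensors per device.
dev0 = [torch.ones(3, device="cuda:0"), torch.arange(3.0, device="cuda:0")]
dev1 = [torch.ones(3, device="cuda:1"), torch.arange(3.0, device="cuda:1")]

# Sums corresponding tensors across devices, placing the results on cuda:0.
summed = comm.reduce_add_coalesced([dev0, dev1], destination=0)
print(summed)
```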
### Suggest a potential alternative/fix
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @svekars @brycebortree @sekyondaMeta @AlannaBurke
| true
|
2,860,558,658
|
[export] fail to export joint graph of a model with tied weights using experimental `_export_forward_backward` API
|
mksit
|
open
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 0
|
NONE
|
### 🐛 Describe the bug
When using `_export_forward_backward` to export the joint graph of a model with tied weights, I've encountered the following error:
```
Traceback (most recent call last):
File "/home/mankit/workspace/Chowa/test.py", line 32, in <module>
main()
File "/home/mankit/workspace/Chowa/test.py", line 27, in main
joint_ep = _export_forward_backward(ep, 0)
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/export/experimental/__init__.py", line 58, in _export_forward_backward
ep = _decompose_exported_program(
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/export/exported_program.py", line 742, in _decompose_exported_program
gm, new_graph_signature = _decompose_and_get_gm_with_new_signature_constants(
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/export/exported_program.py", line 505, in _decompose_and_get_gm_with_new_signature_constants
gm, graph_signature = aot_export_module(
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1383, in aot_export_module
fx_g = make_fx(flattened_joint, record_module_stack=True)(*full_args)
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2196, in wrapped
return make_fx_tracer.trace(f, *args)
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2134, in trace
return self._trace_inner(f, *args)
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 2105, in _trace_inner
t = dispatch_trace(
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1138, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1694, in trace
res = super().trace(root, concrete_args)
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
return fn(*args, **kwargs)
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 843, in trace
(self.create_arg(fn(*args)),),
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1193, in wrapped
out = f(*tensors) # type:ignore[call-arg]
File "<string>", line 1, in <lambda>
File "/mnt/data/mksit/anaconda3/envs/chowa/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1371, in flattened_joint
assert (
AssertionError: Found a parameter that did not receive a gradient.
"This is most likely a bug, but if this needs to be supported please comment on this Github issue:
https://github.com/pytorch/pytorch/issues/101192
```
**To reproduce:**
```
import torch
class TestModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.input_embeds = torch.nn.Embedding(50272, 512, padding_idx=1)
self.lm_head = torch.nn.Linear(512, 50272)
self.lm_head.weight = self.input_embeds.weight
def forward(self, x):
x = self.input_embeds(x)
x = self.lm_head(x)
return (x.sum(),)
def main():
mod = TestModel()
x = torch.randint(0, 50272, (16, 1024))
y = mod(x)
print(f"y={y}")
ep = torch.export.export(mod, (x,))
joint_ep = torch.export.experimental._export_forward_backward(ep, 0)
if __name__ == "__main__":
main()
```
### Versions
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
GPU 2: NVIDIA RTX A5000
GPU 3: NVIDIA RTX A5000
GPU 4: NVIDIA RTX A5000
GPU 5: NVIDIA RTX A5000
GPU 6: NVIDIA RTX A5000
GPU 7: NVIDIA RTX A5000
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7453 28-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3488.5249
CPU min MHz: 1500.0000
BogoMIPS: 5500.48
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sev sev_es debug_swap
Virtualisation: AMD-V
L1d cache: 1.8 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 28 MiB (56 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27
NUMA node1 CPU(s): 28-55
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.6.0
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.6.0+cu126
[pip3] torchaudio==2.6.0+cu126
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] torch 2.6.0+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0+cu126 pypi_0 pypi
[conda] torchvision 0.21.0+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,860,536,873
|
Add Missing Communication collectives
|
FFFrog
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 11
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147379
----
- reduce_add_coalesced
| true
|
2,860,513,357
|
[Triton upstream] [Inductor] [ROCm] HSA_STATUS_ERROR_MEMORY_APERTURE_VIOLATION on some inductor UTs
|
jataylo
|
closed
|
[
"module: rocm",
"triaged",
"oncall: pt2",
"upstream triton"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
Platform: ROCm
Triton commit: f73cf3268ef04d862493e0fc1cca5257f2a09346
As seen in https://github.com/pytorch/pytorch/pull/147320 when attempting to bump triton in preparation for 3.3, the latest tip of tree of triton breaks some UTs due to a memory access issue.
Reproducer:
python test/inductor/test_torchinductor.py -k "scatter5_cuda" --verbose
Traceback:
```
test_scatter5_cuda (__main__.GPUTests) ... GPU core dump created: gpucore.1422583
:0:rocdevice.cpp :3018: 62491722575d us: Callback: Queue 0x7fa730c00000 aborting with error : HSA_STATUS_ERROR_MEMORY_APERTURE_VIOLATION: The agent attempted to access memory beyond the largest legal address. code: 0x29
Aborted (core dumped)
```
### Versions
PyTorch: Nightly
Triton: f73cf3268ef04d862493e0fc1cca5257f2a09346
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @chauhang @penguinwu @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov
| true
|
2,860,506,526
|
[Triton upstream] [Inductor] [ROCm] LLVM failure in some gemm kernels
|
jataylo
|
closed
|
[
"module: rocm",
"oncall: pt2",
"upstream triton"
] | 2
|
COLLABORATOR
|
### 🐛 Describe the bug
Platform: ROCm
Triton commit: f73cf3268ef04d862493e0fc1cca5257f2a09346
As seen in https://github.com/pytorch/pytorch/pull/147320 when attempting to bump triton in preparation for 3.3, the latest tip of tree of triton breaks some gemm UTs due to an LLVM error.
Reproducer:
https://gist.github.com/jataylo/898b8fd6bd5b6f213e1dd93d4e9918b7
Unit tests:
> TORCHINDUCTOR_COMPILE_THREADS=1 TORCHINDUCTOR_BENCHMARK_KERNEL=1 TORCH_COMPILE_DEBUG=1 TORCH_LOGS="+all" python test_max_autotune.py TestPrologueFusion.test_broadcast_x_K_64
Traceback:
```
python: /root/.triton/llvm/llvm-1188b1ff-ubuntu-x64/include/llvm/Support/Casting.h:566: decltype(auto) llvm::cast(const From&) [with To = mlir::RankedTensorType; From = mlir::Type]: Assertion `isa<To>(Val) && "cast<Ty>() argument of incompatible type!"' failed.
#blocked = #ttg.blocked<{sizePerThread = [1, 1], threadsPerWarp = [1, 64], warpsPerCTA = [2, 1], order = [1, 0]}>
#blocked1 = #ttg.blocked<{sizePerThread = [1, 1], threadsPerWarp = [64, 1], warpsPerCTA = [2, 1], order = [1, 0]}>
#blocked2 = #ttg.blocked<{sizePerThread = [1, 1], threadsPerWarp = [2, 32], warpsPerCTA = [2, 1], order = [1, 0]}>
#blocked3 = #ttg.blocked<{sizePerThread = [1], threadsPerWarp = [64], warpsPerCTA = [2], order = [0]}>
#blocked4 = #ttg.blocked<{sizePerThread = [1, 1], threadsPerWarp = [1, 64], warpsPerCTA = [1, 2], order = [0, 1]}>
#blocked5 = #ttg.blocked<{sizePerThread = [1, 1], threadsPerWarp = [64, 1], warpsPerCTA = [2, 1], order = [0, 1]}>
#blocked6 = #ttg.blocked<{sizePerThread = [2, 2], threadsPerWarp = [4, 16], warpsPerCTA = [2, 1], order = [1, 0]}>
module attributes {"ttg.num-ctas" = 1 : i32, "ttg.num-warps" = 2 : i32, ttg.target = "hip:gfx942", "ttg.threads-per-warp" = 64 : i32} {
tt.func public @triton_tem_fused_add_mm_0(%arg0: !tt.ptr<f32> {tt.divisibility = 16 : i32, tt.pointer_range = 32 : i32}, %arg1: !tt.ptr<f32> {tt.divisibility = 16 : i32, tt.pointer_range = 32 : i32}, %arg2: !tt.ptr<f32> {tt.divisibility = 16 : i32, tt.pointer_range = 32 : i32}, %arg3: i32 {tt.divisibility = 16 : i32}, %arg4: i32 {tt.divisibility = 16 : i32}) attributes {noinline = false} {
%cst = arith.constant dense<0.000000e+00> : tensor<1x64xf32, #blocked>
%cst_0 = arith.constant 1.000000e+00 : f32
%c63_i32 = arith.constant 63 : i32
%c31_i32 = arith.constant 31 : i32
%c1_i32 = arith.constant 1 : i32
%cst_1 = arith.constant dense<1> : tensor<16x1xi32, #blocked1>
%cst_2 = arith.constant dense<0.000000e+00> : tensor<16x32xf32, #blocked2>
%cst_3 = arith.constant dense<0.000000e+00> : tensor<64x32xf32, #blocked2>
%c64_i32 = arith.constant 64 : i32
%c8_i32 = arith.constant 8 : i32
%c32_i32 = arith.constant 32 : i32
%c16_i32 = arith.constant 16 : i32
%c0_i32 = arith.constant 0 : i32
%0 = arith.cmpi eq, %arg3, %c0_i32 : i32
cf.cond_br %0, ^bb1, ^bb2
^bb1: // pred: ^bb0
tt.return
^bb2: // pred: ^bb0
%1 = tt.get_program_id x : i32
%2 = arith.addi %arg3, %c31_i32 : i32
%3 = arith.divsi %2, %c32_i32 : i32
%4 = arith.muli %3, %c8_i32 : i32
%5 = arith.divsi %1, %4 : i32
%6 = arith.muli %5, %c8_i32 : i32
%7 = arith.subi %c1_i32, %6 : i32
%8 = arith.minsi %7, %c8_i32 : i32
%9 = arith.remsi %1, %8 : i32
%10 = arith.addi %6, %9 : i32
%11 = arith.remsi %1, %4 : i32
%12 = arith.divsi %11, %8 : i32
%13 = arith.muli %12, %c32_i32 : i32
%14 = tt.make_range {end = 32 : i32, start = 0 : i32} : tensor<32xi32, #blocked3>
%15 = tt.splat %13 : i32 -> tensor<32xi32, #blocked3>
%16 = arith.addi %15, %14 : tensor<32xi32, #blocked3>
%17 = tt.splat %arg3 : i32 -> tensor<32xi32, #blocked3>
%18 = arith.remsi %16, %17 {tt.contiguity = dense<32> : tensor<1xi32>, tt.divisibility = dense<32> : tensor<1xi32>} : tensor<32xi32, #blocked3>
%19 = tt.make_range {end = 64 : i32, start = 0 : i32} : tensor<64xi32, #blocked3>
%20 = arith.addi %arg4, %c63_i32 : i32
%21 = arith.divsi %20, %c64_i32 : i32
%22 = ttg.convert_layout %19 : tensor<64xi32, #blocked3> -> tensor<64xi32, #ttg.slice<{dim = 0, parent = #blocked4}>>
%23 = tt.expand_dims %22 {axis = 0 : i32} : tensor<64xi32, #ttg.slice<{dim = 0, parent = #blocked4}>> -> tensor<1x64xi32, #blocked4>
%24 = ttg.convert_layout %23 : tensor<1x64xi32, #blocked4> -> tensor<1x64xi32, #blocked>
%25 = ttg.convert_layout %19 : tensor<64xi32, #blocked3> -> tensor<64xi32, #ttg.slice<{dim = 1, parent = #blocked5}>>
%26 = tt.expand_dims %25 {axis = 1 : i32} : tensor<64xi32, #ttg.slice<{dim = 1, parent = #blocked5}>> -> tensor<64x1xi32, #blocked5>
%27 = ttg.convert_layout %26 : tensor<64x1xi32, #blocked5> -> tensor<64x1xi32, #blocked1>
%28 = ttg.convert_layout %18 : tensor<32xi32, #blocked3> -> tensor<32xi32, #ttg.slice<{dim = 0, parent = #blocked4}>>
%29 = tt.expand_dims %28 {axis = 0 : i32} : tensor<32xi32, #ttg.slice<{dim = 0, parent = #blocked4}>> -> tensor<1x32xi32, #blocked4>
%30 = ttg.convert_layout %29 : tensor<1x32xi32, #blocked4> -> tensor<1x32xi32, #blocked2>
%31 = tt.splat %arg3 : i32 -> tensor<64x1xi32, #blocked1>
%32 = tt.broadcast %30 : tensor<1x32xi32, #blocked2> -> tensor<64x32xi32, #blocked2>
%33 = tt.splat %arg0 : !tt.ptr<f32> -> tensor<64x32x!tt.ptr<f32>, #blocked2>
%34 = scf.for %arg5 = %c0_i32 to %21 step %c1_i32 iter_args(%arg6 = %cst_2) -> (tensor<16x32xf32, #blocked2>) : i32 {
%55 = arith.muli %arg5, %c64_i32 : i32
%56 = arith.subi %arg4, %55 : i32
%57 = tt.splat %56 : i32 -> tensor<1x64xi32, #blocked>
%58 = arith.cmpi slt, %24, %57 : tensor<1x64xi32, #blocked>
%59 = tt.splat %56 : i32 -> tensor<64x1xi32, #blocked1>
%60 = arith.cmpi slt, %27, %59 : tensor<64x1xi32, #blocked1>
%61 = tt.splat %55 : i32 -> tensor<64x1xi32, #blocked1>
%62 = arith.addi %27, %61 : tensor<64x1xi32, #blocked1>
%63 = tt.load %arg1 : !tt.ptr<f32>
%64 = arith.addf %63, %cst_0 : f32
%65 = tt.splat %64 : f32 -> tensor<1x64xf32, #blocked>
%66 = arith.select %58, %65, %cst : tensor<1x64xi1, #blocked>, tensor<1x64xf32, #blocked>
%67 = tt.broadcast %66 : tensor<1x64xf32, #blocked> -> tensor<16x64xf32, #blocked>
%68 = arith.muli %62, %31 : tensor<64x1xi32, #blocked1>
%69 = tt.broadcast %68 : tensor<64x1xi32, #blocked1> -> tensor<64x32xi32, #blocked1>
%70 = ttg.convert_layout %69 : tensor<64x32xi32, #blocked1> -> tensor<64x32xi32, #blocked2>
%71 = arith.addi %32, %70 : tensor<64x32xi32, #blocked2>
%72 = tt.addptr %33, %71 : tensor<64x32x!tt.ptr<f32>, #blocked2>, tensor<64x32xi32, #blocked2>
%73 = tt.broadcast %60 : tensor<64x1xi1, #blocked1> -> tensor<64x32xi1, #blocked1>
%74 = ttg.convert_layout %73 : tensor<64x32xi1, #blocked1> -> tensor<64x32xi1, #blocked2>
%75 = tt.load %72, %74, %cst_3 : tensor<64x32x!tt.ptr<f32>, #blocked2>
%76 = ttg.convert_layout %67 : tensor<16x64xf32, #blocked> -> tensor<16x64xf32, #ttg.dot_op<{opIdx = 0, parent = #blocked6}>>
%77 = ttg.convert_layout %75 : tensor<64x32xf32, #blocked2> -> tensor<64x32xf32, #ttg.dot_op<{opIdx = 1, parent = #blocked6}>>
%78 = ttg.convert_layout %arg6 : tensor<16x32xf32, #blocked2> -> tensor<16x32xf32, #blocked6>
%79 = tt.dot %76, %77, %78 : tensor<16x64xf32, #ttg.dot_op<{opIdx = 0, parent = #blocked6}>> * tensor<64x32xf32, #ttg.dot_op<{opIdx = 1, parent = #blocked6}>> -> tensor<16x32xf32, #blocked6>
%80 = ttg.convert_layout %79 : tensor<16x32xf32, #blocked6> -> tensor<16x32xf32, #blocked2>
scf.yield %80 : tensor<16x32xf32, #blocked2>
}
%35 = arith.muli %10, %c16_i32 : i32
%36 = tt.make_range {end = 16 : i32, start = 0 : i32} : tensor<16xi32, #blocked3>
%37 = tt.splat %35 : i32 -> tensor<16xi32, #blocked3>
%38 = arith.addi %37, %36 : tensor<16xi32, #blocked3>
%39 = ttg.convert_layout %38 : tensor<16xi32, #blocked3> -> tensor<16xi32, #ttg.slice<{dim = 1, parent = #blocked5}>>
%40 = tt.expand_dims %39 {axis = 1 : i32} : tensor<16xi32, #ttg.slice<{dim = 1, parent = #blocked5}>> -> tensor<16x1xi32, #blocked5>
%41 = ttg.convert_layout %40 : tensor<16x1xi32, #blocked5> -> tensor<16x1xi32, #blocked1>
%42 = ttg.convert_layout %16 : tensor<32xi32, #blocked3> -> tensor<32xi32, #ttg.slice<{dim = 0, parent = #blocked4}>>
%43 = tt.expand_dims %42 {axis = 0 : i32} : tensor<32xi32, #ttg.slice<{dim = 0, parent = #blocked4}>> -> tensor<1x32xi32, #blocked4>
%44 = ttg.convert_layout %43 : tensor<1x32xi32, #blocked4> -> tensor<1x32xi32, #blocked2>
%45 = arith.cmpi slt, %41, %cst_1 : tensor<16x1xi32, #blocked1>
%46 = tt.splat %arg3 : i32 -> tensor<1x32xi32, #blocked2>
%47 = arith.cmpi slt, %44, %46 : tensor<1x32xi32, #blocked2>
%48 = tt.broadcast %45 : tensor<16x1xi1, #blocked1> -> tensor<16x32xi1, #blocked1>
%49 = ttg.convert_layout %48 : tensor<16x32xi1, #blocked1> -> tensor<16x32xi1, #blocked2>
%50 = tt.broadcast %47 : tensor<1x32xi1, #blocked2> -> tensor<16x32xi1, #blocked2>
%51 = arith.andi %49, %50 : tensor<16x32xi1, #blocked2>
%52 = tt.splat %arg2 : !tt.ptr<f32> -> tensor<1x32x!tt.ptr<f32>, #blocked2>
%53 = tt.addptr %52, %44 : tensor<1x32x!tt.ptr<f32>, #blocked2>, tensor<1x32xi32, #blocked2>
%54 = tt.broadcast %53 : tensor<1x32x!tt.ptr<f32>, #blocked2> -> tensor<16x32x!tt.ptr<f32>, #blocked2>
tt.store %54, %34, %51 : tensor<16x32x!tt.ptr<f32>, #blocked2>
tt.return
}
}
{-#
external_resources: {
mlir_reproducer: {
pipeline: "builtin.module(tritongpu-coalesce, tritongpu-remove-layout-conversions, tritongpu-optimize-thread-locality, tritonamdgpu-accelerate-matmul{arch-generation-name=gfx942 kPack=1 matrix-instruction-size=0}, tritongpu-remove-layout-conversions, tritonamdgpu-optimize-epilogue, tritongpu-optimize-dot-operands{hoist-layout-conversion=true}, tritonamdgpu-stream-pipeline{global_prefetch=0 local_prefetch=0 num_stages=2}, canonicalize{ max-iterations=10 max-num-rewrites=-1 region-simplify=normal test-convergence=false top-down=true}, tritongpu-optimize-dot-operands{hoist-layout-conversion=true}, tritongpu-remove-layout-conversions, tritongpu-reduce-data-duplication, tritonamdgpu-reorder-instructions, canonicalize{ max-iterations=10 max-num-rewrites=-1 region-simplify=normal test-convergence=false top-down=true}, cse, symbol-dce)",
disable_threading: false,
verify_each: true
}
}
#-}
/root/repro_new.py:6:0: error: Failures have been detected while processing an MLIR pass pipeline
/root/repro_new.py:6:0: note: Pipeline failed while executing [`TritonAMDGPUReorderInstructions` on 'builtin.module' operation]: reproducer generated at `std::errs, please share the reproducer above with Triton project.`
Traceback (most recent call last):
File "/root/repro_new.py", line 112, in <module>
main()
File "/root/repro_new.py", line 103, in main
triton_tem_fused_add_mm_0[grid](
File "/root/triton/python/triton/runtime/jit.py", line 336, in <lambda>
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
File "/root/triton/python/triton/runtime/jit.py", line 563, in run
kernel = self.compile(src, target=target, options=options.__dict__)
File "/root/triton/python/triton/compiler/compiler.py", line 283, in compile
next_module = compile_ir(module, metadata)
File "/root/triton/python/triton/backends/amd/compiler.py", line 389, in <lambda>
stages["ttgir"] = lambda src, metadata: self.make_ttgir(src, metadata, options)
File "/root/triton/python/triton/backends/amd/compiler.py", line 244, in make_ttgir
pm.run(mod)
RuntimeError: PassManager::run failed
```
### Versions
Triton commit: f73cf3268ef04d862493e0fc1cca5257f2a09346
PyTorch: nightly
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @hongxiayang @naromero77amd @chauhang @penguinwu @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov
| true
|
2,860,422,484
|
Nightly Windows builds started to time out around Jan 31, 2025
|
jeanschmidt
|
closed
|
[
"module: build",
"module: cuda",
"triaged"
] | 10
|
CONTRIBUTOR
|
### 🐛 Describe the bug
Nightly binary builds for Windows have been broken for multiple days in a row.
https://hud.pytorch.org/hud/pytorch/pytorch/nightly
### Versions
nightly
cc @malfet @seemethere @ptrblck @msaroufim @eqy
| true
|
2,860,281,796
|
[Triton upstream] [Inductor] Widespread failures in UTs: AttributeError: 'dict' object has no attribute 'equal_to_1'
|
jataylo
|
closed
|
[
"triaged",
"oncall: pt2",
"upstream triton",
"oncall: export",
"module: aotinductor"
] | 8
|
COLLABORATOR
|
### 🐛 Describe the bug
Platform: NV and ROCm
Triton commit: f73cf3268ef04d862493e0fc1cca5257f2a09346
As seen in https://github.com/pytorch/pytorch/pull/147320 when attempting to bump triton in preparation for 3.3, the latest tip of tree of triton breaks many UTs due to an apparent API deprecation.
Reproducer:
python test/inductor/test_torchinductor.py -k "test_sdpa_inference_mode_aot_compile" --verbose
Traceback:
```
======================================================================
ERROR: test_sdpa_inference_mode_aot_compile (__main__.TritonCodeGenTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/test/inductor/test_torchinductor.py", line 12833, in test_sdpa_inference_mode_aot_compile
torch._inductor.aot_compile(traced, inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/__init__.py", line 265, in aot_compile
return compile_fx_aot(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1639, in compile_fx_aot
compiled_artifacts = compile_fx(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1828, in compile_fx
return compile_fx(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1871, in compile_fx
return compile_fx(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2155, in compile_fx
return inference_compiler(unlifted_gm, example_inputs_)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 479, in __call__
return self.compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 2038, in fw_compiler_base
return inner_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 623, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 104, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 727, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1402, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1093, in codegen_and_compile
code, linemap = graph.codegen_with_cpp_wrapper()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1795, in codegen_with_cpp_wrapper
return self.codegen()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1905, in codegen
self.scheduler.codegen()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 3885, in codegen
return self._codegen()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 3966, in _codegen
self.get_backend(device).codegen_node(node)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/cuda_combined_scheduling.py", line 104, in codegen_node
return self._triton_scheduling.codegen_node(node)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/simd.py", line 1323, in codegen_node
return self.codegen_node_schedule(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/simd.py", line 1387, in codegen_node_schedule
final_kernel.call_kernel(final_kernel.kernel_name)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/triton.py", line 3647, in call_kernel
wrapper.generate_kernel_call(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/cpp_wrapper_gpu.py", line 562, in generate_kernel_call
equal_to_1 = triton_meta["configs"][0].equal_to_1
AttributeError: 'dict' object has no attribute 'equal_to_1'
To execute this test, run the following from the base repo dir:
python test/inductor/test_torchinductor.py TritonCodeGenTests.test_sdpa_inference_mode_aot_compile
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
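Not a proposed fix, but a minimal compatibility sketch for the failing access in `cpp_wrapper_gpu.py`, assuming newer Triton now returns a plain dict where an object with an `equal_to_1` attribute used to be (the dict key name is an assumption):
```python
# Hedged sketch: tolerate both the old attribute-style config object and the
# new plain-dict config when reading equal_to_1.
def get_equal_to_1(triton_meta):
    cfg = triton_meta["configs"][0]
    if isinstance(cfg, dict):
        return cfg.get("equal_to_1", ())  # key name assumed
    return cfg.equal_to_1
```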
### Versions
Triton commit: f73cf3268ef04d862493e0fc1cca5257f2a09346
PyTorch: nightly
cc @chauhang @penguinwu @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi
| true
|
2,860,259,448
|
[ONNX] How to export triton custom kernels as custom ops?
|
zzq96
|
closed
|
[
"module: onnx",
"triaged"
] | 10
|
NONE
|
### 🐛 Describe the bug
Can't export a triton custom op kernel when using torch.onnx.export(dynamo=True).
I have used triton_op and wrap_triton to wrap this triton kernel.
```python
import torch
from torch.library import triton_op, wrap_triton
import triton
from triton import language as tl
@triton.jit
def add_kernel(
in_ptr0,
in_ptr1,
out_ptr,
n_elements,
BLOCK_SIZE: "tl.constexpr",
):
pid = tl.program_id(axis=0)
block_start = pid * BLOCK_SIZE
offsets = block_start + tl.arange(0, BLOCK_SIZE)
mask = offsets < n_elements
x = tl.load(in_ptr0 + offsets, mask=mask)
y = tl.load(in_ptr1 + offsets, mask=mask)
output = x + y
tl.store(out_ptr + offsets, output, mask=mask)
@triton_op("mylib::add", mutates_args={})
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
output = torch.empty_like(x)
n_elements = output.numel()
def grid(meta):
return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
# NB: we need to wrap the triton kernel in a call to wrap_triton
wrap_triton(add_kernel)[grid](x, y, output, n_elements, 16)
return output
@torch.compile
def f(x, y):
return add(x, y)
x = torch.randn(3, device="cuda")
y = torch.randn(3, device="cuda")
z = f(x, y)
assert torch.allclose(z, x + y)
with torch.no_grad():
torch.onnx.export(f,
(x,y,),
"triton_export.onnx",
export_params=True,
dynamo=True,
opset_version=18,
do_constant_folding=False,
optimize=False,
#custom_translation_table=custom_translation_table,
input_names=["zzq_a","zzq_b"],
output_names=["zzq_out"],
verbose=True)
```
error msg:
```
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export(..., strict=False)`... ❌
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export`...
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with `torch.export.export`... ❌
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with Torch Script...
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with Torch Script... ❌
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with internal Dynamo apis...
[torch.onnx] Obtain model graph for `<function f at 0x7f646a1b2670>` with internal Dynamo apis... ✅
[torch.onnx] Run decomposition...
[torch.onnx] Run decomposition... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ❌
Traceback (most recent call last):
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 708, in _translate_fx_graph
_handle_call_function_node_with_lowering(
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 490, in _handle_call_function_node_with_lowering
raise _errors.DispatchError(
torch.onnx._internal.exporter._errors.DispatchError: No ONNX function found for <torch._higher_order_ops.triton_kernel_wrap.TritonKernelWrapperFunctional object at 0x7f63c5fa01c0>. Failure message: No decompositions registered for the real-valued input
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 1372, in export
onnx_program = _exported_program_to_onnx_program(
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 1008, in _exported_program_to_onnx_program
values = _translate_fx_graph(
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 734, in _translate_fx_graph
raise _errors.ConversionError(
torch.onnx._internal.exporter._errors.ConversionError: Error when translating node %triton_kernel_wrapper_functional_proxy : [num_users=1] = call_function[target=torch.ops.higher_order.triton_kernel_wrapper_functional](args = (), kwargs = {kernel_idx: 0, constant_args_idx: 10, grid: [(1, 1, 1)], tma_descriptor_metadata: {}, kwargs: {in_ptr0: %arg0, in_ptr1: %arg1, out_ptr: %empty_like, n_elements: 3, BLOCK_SIZE: 16}, tensors_to_clone: [out_ptr]}). See the stack trace for more information.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/app/torch_ddp/triton_export.py", line 38, in <module>
torch.onnx.export(f,
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/__init__.py", line 351, in export
return _compat.export_compat(
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_compat.py", line 304, in export_compat
onnx_program = _core.export(
File "/usr/bin/python3.9/lib/python3.9/site-packages/torch/onnx/_internal/exporter/_core.py", line 1416, in export
raise _errors.ConversionError(
torch.onnx._internal.exporter._errors.ConversionError: Failed to convert the exported program to an ONNX model. This is step 3/3 of exporting the model to ONNX. Next steps:
- If there is a missing ONNX function, implement it and register it to the registry.
- If there is an internal error during ONNX conversion, debug the error and summit a PR to PyTorch.
- Create an error report with `torch.onnx.export(..., report=True)`, and save the ExportedProgram as a pt2 file. Create an issue in the PyTorch GitHub repository against the *onnx* component. Attach the error report and the pt2 model.
```
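For what it's worth, the `custom_translation_table` argument that is commented out in the snippet above is the exporter's documented hook for supplying a translation of a custom op; a hedged, untested sketch for `mylib::add` is below. Whether the exporter dispatches to it before decomposing the op into the `triton_kernel_wrapper_functional` higher-order op is exactly what this issue is asking, so treat it as an attempt rather than an answer.
```python
# Hedged sketch (untested for this case): translate mylib::add to ONNX Add and
# pass the table to torch.onnx.export(..., dynamo=True).
import torch
import onnxscript
from onnxscript import opset18 as op

@onnxscript.script()
def onnx_add(x, y):
    return op.Add(x, y)

custom_translation_table = {torch.ops.mylib.add.default: onnx_add}
# torch.onnx.export(f, (x, y), "triton_export.onnx", dynamo=True,
#                   custom_translation_table=custom_translation_table, ...)
```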
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+git1eba9b3
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Linux 3.2 (Final) (x86_64)
GCC version: (GCC) 10.3.1 20210422 (Red Hat 10.3.1-1)
Clang version: 9.0.1 (Red Hat 9.0.1-2.module_el8.2.0+309+0c7b6b03)
CMake version: version 3.19.0
Libc version: glibc-2.28
Python version: 3.9.16 (main, Dec 11 2024, 20:47:20) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] (64-bit runtime)
Python platform: Linux-5.4.119-1-tlinux4-0010.3-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10
GPU 1: NVIDIA A10
GPU 2: NVIDIA A10
GPU 3: NVIDIA A10
Nvidia driver version: 470.141.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.7
/usr/lib/libcudnn_adv_infer.so.8.9.7
/usr/lib/libcudnn_adv_train.so.8.9.7
/usr/lib/libcudnn_cnn_infer.so.8.9.7
/usr/lib/libcudnn_cnn_train.so.8.9.7
/usr/lib/libcudnn_ops_infer.so.8.9.7
/usr/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7K83 64-Core Processor
Stepping: 0
CPU MHz: 2545.218
BogoMIPS: 5090.43
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-111
NUMA node1 CPU(s): 112-223
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 erms rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 arat
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0
[pip3] tf2onnx==1.9.3
[pip3] torch==2.6.0a0+git1eba9b3
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,860,243,002
|
[Triton upstream] [Inductor] Flex attention failures `IndexError('list index out of range')` in Triton Compilation
|
jataylo
|
closed
|
[
"oncall: pt2",
"upstream triton"
] | 1
|
COLLABORATOR
|
### 🐛 Describe the bug
Platform: NV and ROCm
Triton commit: f73cf3268ef04d862493e0fc1cca5257f2a09346
As seen in https://github.com/pytorch/pytorch/pull/147320 when attempting to bump triton in preparation for 3.3, the latest tip of tree of triton breaks many flex_attention and flex_decode tests.
Repro:
`python test/inductor/test_flex_attention.py -k "test_small_q_kv_len" --verbose`
Traceback:
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/tmp/pytorch/test/inductor/test_flex_attention.py", line 3202, in test_small_q_kv_len
out_compiled, lse_compiled = flex_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 752, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 737, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1402, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1122, in codegen_and_compile
compiled_fn = graph.compile_to_module().call
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1986, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2028, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2757, in load_by_key_path
mod = _reload_python_module(key, path)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 51, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpqofkwbp6/33/c33tjgm26qfhojp4xtk5fdrlgflp2s335u4zukrvkx37tyhhjrl3.py", line 237, in <module>
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 254, in triton
kernel.precompile()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 265, in precompile
self._precompile_worker()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 280, in _precompile_worker
compile_results.append(self._precompile_config(c))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 513, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
File "/root/triton/python/triton/compiler/compiler.py", line 277, in compile
module = src.make_ir(options, codegen_fns, module_map, context)
File "/root/triton/python/triton/compiler/compiler.py", line 81, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,
torch._inductor.exc.InductorError: CompilationError: at 127:4:
tl.device_assert(BLOCK_M % G == 0)
BLOCK_M_PER_HQ: tl.constexpr = BLOCK_M // G
off_g = tl.arange(0, G) # [G]
offs_g = tl.ravel(tl.broadcast_to(off_g[:, None], [G, BLOCK_M_PER_HQ])) # [BLOCK_M]
offs_hq = offs_g + off_hkv * G
off_m = tl.arange(0, BLOCK_M_PER_HQ) # [BLOCK_M_PER_HQ]
offs_m = tl.ravel(tl.broadcast_to(off_m[None, :], [G, BLOCK_M_PER_HQ])) # [BLOCK_M]
offs_d = tl.arange(0, QK_HEAD_DIM_ROUNDED)
offs_vd = tl.arange(0, V_HEAD_DIM_ROUNDED)
# Get HZ offsets for KV_NUM_BLKS and KV_IDX
stride_block_z, stride_block_h, stride_block_row, stride_block_col = 1, 1, 1
^
IndexError('list index out of range')
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test/inductor/test_flex_attention.py TestFlexAttention.test_small_q_kv_len
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
### Versions
Triton commit: f73cf3268ef04d862493e0fc1cca5257f2a09346
PyTorch: nightly
cc @chauhang @penguinwu @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov
| true
|
2,860,177,421
|
Investigate #75462
|
rec
|
closed
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147372
* #146894
| true
|
2,860,009,125
|
Add HPU support to test_structured_sparsifier.py
|
amathewc
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
# MOTIVATION
We recently integrated support for Intel Gaudi devices (identified as 'hpu') into the common_device_type framework via the pull request at https://github.com/pytorch/pytorch/pull/126970. This integration allows tests to be automatically instantiated for Gaudi devices upon loading the relevant library. Building on this development, the current pull request extends the utility of these hooks by adapting tests from test_structured_sparsifier.py to operate on Gaudi devices. Additionally, we have confirmed that these modifications do not interfere with the existing tests on CUDA devices.
Other accelerators can also extend the functionality by adding their device to the devices set (e.g., xpu).
Please note that the previous PR (https://github.com/pytorch/pytorch/pull/147370) was deleted due to CLA issues.
# CHANGES
Use TEST_CUDA and TEST_HPU flags to set the device available in the test environment
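As a hedged illustration (not the PR's actual diff) of how the flags can gate the device set, assuming `TEST_CUDA`/`TEST_HPU` are exposed by `torch.testing._internal.common_utils` as described:
```python
# Hedged sketch: build the device set for the tests from availability flags.
from torch.testing._internal.common_utils import TEST_CUDA, TEST_HPU

devices = {"cpu"}
if TEST_CUDA:
    devices.add("cuda")
if TEST_HPU:
    devices.add("hpu")  # other accelerators (e.g. "xpu") can be added the same way
```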
@ankurneog
| true
|
2,859,975,795
|
Add HPU support to test_structured_sparsifier.py
|
amathewc
|
closed
|
[
"open source",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
## MOTIVATION
We recently integrated support for Intel Gaudi devices (identified as 'hpu') into the common_device_type framework via the pull request at https://github.com/pytorch/pytorch/pull/126970. This integration allows tests to be automatically instantiated for Gaudi devices upon loading the relevant library. Building on this development, the current pull request extends the utility of these hooks by adapting tests from test_structured_sparsifier.py to operate on Gaudi devices. Additionally, we have confirmed that these modifications do not interfere with the existing tests on CUDA devices.
Other accelerators can also extend the functionality by adding their device to the devices set (e.g., xpu).
## CHANGES
- Use TEST_CUDA and TEST_HPU flags to set the device available in the test environment
@ankurneog
| true
|
2,859,890,982
|
[MPS] Implemented `masked_fill_scalar` as shader
|
malfet
|
closed
|
[
"Merged",
"topic: bug fixes",
"release notes: mps",
"ciflow/mps"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147369
- Move `pos_from_thread_index` and `offset_from_pos` from `UnfoldBackward.metal` into the `c10/metal/indexing.h` header
- The initial idea was to implement `StridedTensor` and `ConstStridedTensor` and use them to make the masked_fill kernel something as simple as the following loop
```metal
ConstStridedTensor<bool> mask(mask_data, sizes, mask_strides, ndim);
if (mask[thread_index]) {
StridedTensor<T> input(input_data, sizes, input_strides, ndim);
input[thread_index] = val;
}
```
But though it looks elegant and works correctly, performance-wise it's much slower than the existing MPS shader (see table below), as int64 divisions on the M2 GPU are really slow
- Solved the performance issue by implementing 3 flavors of the same shader: `dense`, which is used when both input and mask are dense tensors of the same size; `broadcast`, which is used when `mask` is expandable along leading dimensions into the input tensor; and `strided`, which is a general-purpose fallback but still computes the position in the tensors only once. As a result, perf is even better than the existing MPS shader for dense and broadcastable tensors.
Performance measured on an M2 Pro through different iterations of the same shader
| dtype | MPS | int64-idx | int64-inlined | 32-bit strided | 32-bit broadcasted |
| ------|------| -----| ---- | --- | ---- |
| float32 | 2.8 msec | 41.6 msec | 26.9 msec | 5 msec | 2.4 msec |
| float16 | 1.86 msec | 38.2 msec| 26.6 msec | 4.6 msec | 1.9 msec |
|bfloat16|1.86 msec |38.3 msec | 26.6 msec | 4.6 msec | 1.9 msec |
And benchmark script
```python
import torch
from timeit import default_timer
from itertools import product
from torch.utils.benchmark import Measurement, Timer
def bench_mask_fill(
n,
binary_func,
dtype=torch.float32,
) -> Measurement:
t = Timer(
stmt=f"x.masked_fill(y, -17.0); torch.mps.synchronize()",
setup=f"x,y = torch.rand(1, 20, {n}, {n}, dtype={dtype}, device='mps'), torch.ones({n}, {n}, device='mps').triu().bool()",
globals = {'f': binary_func},
language="python", timer=default_timer
)
return t.blocked_autorange()
if __name__ == "__main__":
n = 1024
for dtype in [torch.float32, torch.float16, torch.bfloat16]:
eager_t = bench_mask_fill(n, torch.fmax, dtype)
use_msec = eager_t.mean > 1e-4
multiplier = 1e3 if use_msec else 1e6
uname = "msec" if use_msec else "usec"
print(f"torch.masked_fill_() {str(dtype):>14} {eager_t.mean*multiplier:>7.2f} {uname}")
```
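And a quick, hedged correctness check (not part of the PR) that touches the dense, broadcast, and strided paths described above by comparing against CPU results; it assumes an MPS-capable machine:
```python
# Hedged sanity check for the dense / broadcast / strided shader paths.
import torch

def check(x, mask):
    got = x.to("mps").masked_fill(mask.to("mps"), -17.0).cpu()
    ref = x.masked_fill(mask, -17.0)
    assert torch.equal(got, ref)

x = torch.rand(1, 20, 64, 64)
check(x, torch.zeros(1, 20, 64, 64, dtype=torch.bool))        # dense, same shape
check(x, torch.ones(64, 64).triu().bool())                    # broadcastable mask
check(x.transpose(-1, -2), torch.ones(64, 64).tril().bool())  # strided input
```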
Fixes https://github.com/pytorch/pytorch/issues/143477
| true
|
2,859,849,318
|
[Inductor][CPP] Add float16 support for CppMicroGemmAMX
|
CaoE
|
open
|
[
"triaged",
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Add float16 support for CppMicroGemmAMX to get better performance from the float16 GEMM template. Float16 CppMicroGemmAMX needs a newer compiler, e.g., GCC 13.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,859,777,626
|
Force build to conform C++ standard on windows by adding `/permissive-` flag
|
Stonepia
|
closed
|
[
"oncall: distributed",
"oncall: jit",
"module: windows",
"module: cpu",
"module: mkldnn",
"open source",
"NNC",
"release notes: jit",
"module: inductor",
"module: dynamo",
"release notes: distributed (checkpoint)",
"module: compiled autograd",
"module: xpu"
] | 26
|
CONTRIBUTOR
|
Fixes #147366
1. Add `/permissive-` to the `torch_compile_options` for the build to conform to the C++ standard.
2. Fix the error when trying to assign a string literal to a non-const ptr.
The `/permissive-` flag can be found at https://learn.microsoft.com/en-us/cpp/build/reference/permissive-standards-conformance?view=msvc-170
From the above [doc](https://learn.microsoft.com/en-us/cpp/build/reference/permissive-standards-conformance?view=msvc-170#remarks),
> By default, the /permissive- option is set in new projects created by Visual Studio 2017 version 15.5 and later versions.
> The /permissive- option is implicitly set by the /std:c++latest option starting in Visual Studio 2019 version 16.8, and in version 16.11 by the /std:c++20 option.
Thus, it is reasonable to add this flag to the existing project.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @mingfeima @XiaobingSuper @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @voznesenskym @penguinwu @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @xmfan @fengyuan14 @guangyey @xuhancn
| true
|
2,859,772,525
|
[XPU] [Win] Build error when upgrade oneAPI
|
Stonepia
|
closed
|
[
"triaged",
"module: xpu"
] | 2
|
CONTRIBUTOR
|
### 🐛 Describe the bug
When trying to upgrade oneAPI to a new internal build, we get the following error:
```
[5476/7907] Building CXX object c10\xpu\test\CMakeFiles\c10_xpu_XPUStreamTest.dir\impl\XPUStreamTest.cpp.obj
FAILED: c10/xpu/test/CMakeFiles/c10_xpu_XPUStreamTest.dir/impl/XPUStreamTest.cpp.obj
C:\PROGRA~1\MICROS~3\2022\COMMUN~1\VC\Tools\MSVC\1442~1.344\bin\Hostx64\x64\cl.exe /nologo /TP -DEXPORT_AOTI_FUNCTIONS -DFLASHATTENTION_DISABLE_ALIBI -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNOMINMAX -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -DUSE_MIMALLOC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_UCRT_LEGACY_INFINITY -IC:\pytorch\pytorch\build\aten\src -IC:\pytorch\pytorch\aten\src -IC:\pytorch\pytorch\build -IC:\pytorch\pytorch -IC:\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\pytorch\pytorch\third_party\onnx -IC:\pytorch\pytorch\build\third_party\onnx -IC:\pytorch\pytorch\nlohmann -IC:\pytorch\pytorch\third_party\mimalloc\include -IC:\pytorch\pytorch\c10\xpu\..\.. -IC:\pytorch\pytorch\c10\.. -external:IC:\pytorch\pytorch\build\third_party\gloo -external:IC:\pytorch\pytorch\cmake\..\third_party\gloo -external:IC:\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -external:IC:\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -external:IC:\pytorch\pytorch\third_party\protobuf\src -external:I"C:\Program Files (x86)\Intel\oneAPI\mkl\latest\include" -external:IC:\pytorch\pytorch\third_party\XNNPACK\include -external:IC:\pytorch\pytorch\third_party\ittapi\include -external:IC:\pytorch\pytorch\cmake\..\third_party\eigen -external:I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\include" -external:I"C:\Program Files (x86)\Intel\oneAPI\dnnl\latest\include" -external:IC:\pytorch\pytorch\third_party\ideep\include -external:IC:\pytorch\pytorch\INTERFACE -external:IC:\pytorch\pytorch\third_party\nlohmann\include -external:I"C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include" -external:I"C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl" -external:IC:\pytorch\pytorch\third_party\googletest\googletest\include -external:IC:\pytorch\pytorch\third_party\googletest\googletest -external:W0 /DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 -DUSE_XPU /O2 /Ob2 /DNDEBUG /bigobj -DNDEBUG -std:c++17 -MD -DSYCL_COMPILER_VERSION=20250100 -DMKL_HAS_SBGEMM -DMKL_HAS_SHGEMM -DCAFFE2_USE_GLOO /showIncludes /Foc10\xpu\test\CMakeFiles\c10_xpu_XPUStreamTest.dir\impl\XPUStreamTest.cpp.obj /Fdc10\xpu\test\CMakeFiles\c10_xpu_XPUStreamTest.dir\ /FS -c C:\pytorch\pytorch\c10\xpu\test\impl\XPUStreamTest.cpp
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(236): error C2065: 'CtorArgTy': undeclared identifier
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(236): note: the template instantiation context (the oldest one first) is
C:\pytorch\pytorch\c10\xpu\test\impl\XPUStreamTest.cpp(34): note: see reference to function template instantiation 'testing::AssertionResult testing::internal::EqHelper::Compare<sycl::_V1::queue,sycl::_V1::queue,0x0>(const char *,const char *,const T1 &,const T2 &)' being compiled
...
C:\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include\gtest/gtest-printers.h(333): note: while compiling class template member function 'unknown-type testing::internal::internal_stream_operator_without_lexical_name_lookup::StreamPrinter::PrintValue(const T &,std::ostream *)'
C:\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include\gtest/gtest-printers.h(245): note: see reference to function template instantiation 'sycl::_V1::vec<sycl::_V1::cl_uint,4> sycl::_V1::detail::operator <<(const sycl::_V1::vec<sycl::_V1::cl_uint,4> &,const sycl::_V1::vec<sycl::_V1::cl_uint,4> &)' being compiled
C:\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include\gtest/gtest-printers.h(245): note: while compiling class template member function 'sycl::_V1::vec<sycl::_V1::cl_uint,4>::vec(const argTN ...)'
C:\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include\gtest/gtest-printers.h(245): note: while processing the default template argument of 'sycl::_V1::vec<sycl::_V1::cl_uint,4>::vec(const argTN ...)'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(283): note: see reference to variable template 'const bool sycl::_V1::vec<unsigned int,4>::AllowArgTypeInVariadicCtor<std::basic_ostream<char,std::char_traits<char> > >' being compiled
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(236): error C2923: 'sycl::_V1::detail::is_vec_or_swizzle_v': 'CtorArgTy' is not a valid template type argument for parameter 'T'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(236): note: see declaration of 'CtorArgTy'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(236): error C2059: syntax error: ')'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(236): error C2143: syntax error: missing ';' before '{'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(237): error C2653: 'CtorArgTy': is not a class or namespace name
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(237): error C3861: 'size': identifier not found
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(238): error C2653: 'CtorArgTy': is not a class or namespace name
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(238): error C2146: syntax error: missing '>' before identifier 'element_type'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(238): error C2065: 'DataT': undeclared identifier
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(238): error C2062: type 'unknown-type' unexpected
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(240): error C2653: 'CtorArgTy': is not a class or namespace name
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(240): error C2146: syntax error: missing '>' before identifier 'element_type'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(240): error C2065: 'DataT': undeclared identifier
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(240): error C2062: type 'unknown-type' unexpected
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(241): error C2181: illegal else without matching if
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(242): error C2065: 'CtorArgTy': undeclared identifier
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(242): error C2065: 'DataT': undeclared identifier
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(242): error C2923: 'std::is_convertible_v': 'CtorArgTy' is not a valid template type argument for parameter '_From'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(242): note: see declaration of 'CtorArgTy'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(242): error C2923: 'std::is_convertible_v': 'DataT' is not a valid template type argument for parameter '_To'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(242): note: see declaration of 'DataT'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(242): error C2062: type 'unknown-type' unexpected
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(244): error C2440: 'initializing': cannot convert from 'void' to 'const bool'
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(244): note: Expressions of type void cannot be converted to other types
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(244): error C2131: expression did not evaluate to a constant
C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include\sycl/vector.hpp(244): note: a non-constant (sub-)expression was encountered
```
This is because the compiler does not conform to the C++ standard.
To solve this, add the `/permissive-` flag; it forces the compiler to conform to the C++ standard.
Once added, one additional error occurs.
```
C:\pytorch\pytorch\torch\csrc\jit\codegen\fuser\cpu\fused_kernel.cpp(150): error C2440: '=': cannot convert from 'const wchar_t [28]' to 'wchar_t *'
C:\pytorch\pytorch\torch\csrc\jit\codegen\fuser\cpu\fused_kernel.cpp(150): note: Conversion from string literal loses const qualifier (see /Zc:strictStrings)
```
Those above could be solved by following change:
```diff
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 92bedacfef4..c3da39ea990 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
if(USE_XPU)
string(APPEND CMAKE_CXX_FLAGS " -DUSE_XPU")
+ if(WIN32)
+ string(APPEND CMAKE_CXX_FLAGS " /permissive-")
+ endif()
endif()
if(EMSCRIPTEN)
diff --git a/torch/csrc/jit/codegen/fuser/cpu/fused_kernel.cpp b/torch/csrc/jit/codegen/fuser/cpu/fused_kernel.cpp
index 09624309d16..6d9d450061f 100644
--- a/torch/csrc/jit/codegen/fuser/cpu/fused_kernel.cpp
+++ b/torch/csrc/jit/codegen/fuser/cpu/fused_kernel.cpp
@@ -145,7 +145,7 @@ void activate() {
intptr_t run(const std::string& cmd) {
// Getting the path of `cmd.exe`
- wchar_t* comspec = _wgetenv(L"COMSPEC");
+ const wchar_t* comspec = _wgetenv(L"COMSPEC");
if (!comspec) {
comspec = L"C:\\Windows\\System32\\cmd.exe";
}
```
### Versions
oneAPI: internal build 2025.1.
Visual Studio: VS2022
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
2,859,745,354
|
Replace `fw_metadata` info with trace log hint in hint message
|
zeshengzong
|
open
|
[
"triaged",
"open source",
"Stale",
"release notes: AO frontend"
] | 2
|
CONTRIBUTOR
|
Fixes #147135
## Test Result
```bash
RuntimeError: Found a graph input that requires gradients, and received a mutation.
This is currently banned in the aot_export workflow. If you need this functionality,
please file a github issue and submit the trace log.
Get trace log by running with `TORCH_TRACE`:
TORCH_TRACE="/tmp/tracedir" python foo.py
or follow the instructions at https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html#tlparse-torch-trace
```
cc @ezyang
| true
|
2,859,742,474
|
windows-binary-wheel nightly error
|
ozanMSFT
|
closed
|
[
"module: build",
"module: windows",
"triaged"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
Build is failing with the following error:
Build Step:
> CondaError: Downloaded bytes did not match Content-Length
Upload Step:
> No files were found with the provided path: C:\actions-runner\_work\_temp/artifacts. No artifacts will be uploaded.
The error started with `wheel-py3_10-cpu-build`; the other builds are in progress.
This might be a temporary error, but it's worth following up on.
[GH job link](https://github.com/pytorch/pytorch/actions/runs/13385469478/job/37381277081)
[HUD commit link](https://hud.pytorch.org/pytorch/pytorch/commit/7604dd1102bd1c2bee07d60bc6a672c882c6dbd0)
### Versions
wheel-py3_10-cpu-build
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
| true
|
2,859,701,724
|
[ROCm][TunableOp] resolve the rocBLAS version dynamically
|
apakbin
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"rocm",
"ciflow/rocm"
] | 14
|
CONTRIBUTOR
|
Dynamically gets the rocBLAS version instead of relying on preprocessor-time definitions, which may be stale.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,859,697,999
|
Floating point exception (core dumped) in torch.nn.functional.conv3d
|
qiqicliff
|
open
|
[
"module: crash",
"module: nn",
"triaged",
"module: mkldnn"
] | 4
|
NONE
|
### 🐛 Describe the bug
With specific inputs, torch.nn.functional.conv3d triggers a crash.
### code
```
import torch
input_data = torch.randn(2, 3, 10, 10, 10)
weight = torch.randn(4, 3, 3, 3, 3)
bias = torch.randn(4)
output = torch.nn.functional.conv3d(input=input_data, weight=weight, bias=
bias, stride=36028797018963968, padding=1, dilation=1, groups=1)
```
### Output
```
Floating point exception (core dumped)
```
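For comparison, the same call with an ordinary stride runs fine, which points at the absurdly large stride value as the trigger (hedged guess: an overflow or zero divisor in the output-size computation, not a confirmed root cause):
```python
# Same repro with stride=1; completes without crashing.
import torch

input_data = torch.randn(2, 3, 10, 10, 10)
weight = torch.randn(4, 3, 3, 3, 3)
bias = torch.randn(4)
out = torch.nn.functional.conv3d(input_data, weight, bias, stride=1, padding=1, dilation=1, groups=1)
print(out.shape)  # torch.Size([2, 4, 10, 10, 10])
```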
### Version
```
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.18 (main, Sep 11 2023, 13:21:18) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-106-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 1
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 6 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] torchviz==0.0.2
[pip3] triton==2.3.1
[conda] _tflow_select 2.3.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] blas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] cudatoolkit 11.7.1 h4bc3d14_13 conda-forge
[conda] mkl 2023.1.0 h213fc3f_46344 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl-service 2.4.0 py39h5eee18b_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_fft 1.3.11 py39h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] mkl_random 1.2.8 py39h1128e8f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy 1.26.4 py39h5f9d8c6_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] numpy-base 1.26.4 py39hb5e798b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.3.1 pypi_0 pypi
[conda] torchvision 0.18.1 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] triton 2.3.1 pypi_0 pypi
```
### Versions
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,859,656,981
|
Torch with Gunicorn + Flask API performance issue on Docker
|
yothinsaengs
|
closed
|
[] | 1
|
NONE
|
I use Gunicorn as the web server with a Flask API, and I see a performance issue compared with using Waitress as the web server with Flask.
When I run the matrix operations below with numpy, there is no big difference in response time between Gunicorn and Waitress.
Numpy API
---
```
@app.route('/numpy')
def _numpy():
    matrix_a = np.random.rand(640, 640, 3)
    count = 0
    while count < 240:
        matrix_a = (matrix_a**2) % 7
        count += 1
    return jsonify({"message": "Hello, World!"})
```
But when I run the same operation with torch (both with and without torch.no_grad):
Torch API
---
```
@app.route('/torch')
def _torch():
    matrix_a = torch.rand(640, 640, 3)  # Create a random tensor
    count = 0
    while count < 240:
        matrix_a = (matrix_a ** 2) % 7  # Element-wise squaring and modulo
        count += 1
    return jsonify({"message": "Hello, World!"})
```
Torch_no_grad API
---
```
@app.route('/torch_no_grad')
def _torch_ng():
    with torch.no_grad():
        matrix_a = torch.rand(640, 640, 3)  # Create a random tensor
        count = 0
        while count < 240:
            matrix_a = (matrix_a ** 2) % 7  # Element-wise squaring and modulo
            count += 1
    return jsonify({"message": "Hello, World!"})
```
There is a huge difference in response time:
```
limits:
memory: 1g
cpus: '8.0'
numpy
----------
waitress: Mean=1.1698s, Std=0.0300s
gunicorn: Mean=1.1715s, Std=0.0311s
torch
----------
waitress: Mean=0.9230s, Std=0.1078s
gunicorn: Mean=0.8869s, Std=0.1190s
torch_no_grad
----------
waitress: Mean=0.9172s, Std=0.1058s
gunicorn: Mean=0.8886s, Std=0.1126s
limits:
memory: 1g
cpus: '4.0'
numpy
----------
waitress: Mean=1.1876s, Std=0.0407s
gunicorn: Mean=1.1897s, Std=0.0390s
torch
----------
waitress: Mean=0.9502s, Std=0.1281s
gunicorn: Mean=0.9180s, Std=0.1288s
torch_no_grad
----------
waitress: Mean=0.9119s, Std=0.1063s
gunicorn: Mean=0.8678s, Std=0.1105s
limits:
memory: 1g
cpus: '2.0'
numpy
----------
waitress: Mean=1.1881s, Std=0.0494s
gunicorn: Mean=1.1835s, Std=0.0424s
torch
----------
waitress: Mean=0.7837s, Std=0.1328s
gunicorn: Mean=1.3097s, Std=0.0544s
torch_no_grad
----------
waitress: Mean=0.7932s, Std=0.0988s
gunicorn: Mean=1.3300s, Std=0.1083s
```
I evaluated this on a MacBook Air M2 with 16 GB RAM.
Below is the client script that sends requests to Gunicorn and Waitress:
```
import asyncio
import httpx
import time
from collections import defaultdict
import numpy as np

N = 1
url_paths = ["numpy", "torch", "torch_no_grad"]
API_URLS = [
    "http://localhost:8001/",
    "http://localhost:8002/",
]
API_URLS_DICT = {
    "http://localhost:8001/": "waitress",
    "http://localhost:8002/": "gunicorn",
}

async def fetch(client, url):
    start_time = time.perf_counter()  # Start timing
    response = await client.get(url+url_path, timeout=20.0)
    end_time = time.perf_counter()  # End timing
    response_time = end_time - start_time  # Calculate response time
    return {
        "url": url,
        "status": response.status_code,
        "response_time": response_time,
        "data": response.json()
    }

async def main():
    async with httpx.AsyncClient() as client:
        tasks = [fetch(client, url) for url in API_URLS for _ in range(N)]
        results = await asyncio.gather(*tasks)
        return results

if __name__ == "__main__":
    repeat_time = 5
    for url_path in url_paths:
        count = defaultdict(list)
        print(url_path)
        print('----------')
        for _ in range(repeat_time):
            y = asyncio.run(main())
            for x in y:
                count[API_URLS_DICT[x['url']]].append(x['response_time'])
        for k, v in count.items():
            v = np.array(v)
            print(f"{k}: Mean={v.mean():.4f}s, Std={v.std():.4f}s")
        print()
```
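One hypothesis worth checking (not a confirmed diagnosis): under Gunicorn's pre-fork workers, PyTorch's intra-op thread pool may oversubscribe the container's CPU limit, while the numpy element-wise ops above stay single-threaded. A quick way to compare the two deployments is to expose the thread count and optionally pin it; the route name below is illustrative:
```python
import torch

@app.route('/threads')
def _threads():
    # Compare this value between the Waitress and Gunicorn containers.
    return jsonify({"torch_num_threads": torch.get_num_threads()})

# If the counts differ (or exceed the CPU limit), pinning the intra-op pool
# at worker startup may close the gap:
torch.set_num_threads(1)
```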
| true
|
2,859,647,922
|
[DO NOT MERGE][Inductor] Migrate from oneDNN Inner Product to oneDNN MatMul for mkldnn._linear_pointwise and mkldnn._linear_pointwise.binary
|
jiayisunx
|
open
|
[
"module: cpu",
"open source",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147855
* __->__ #147360
* #147073
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,859,637,505
|
[DO NOT MERGE] Update submodule ideep for ideep matmul changes
|
jiayisunx
|
open
|
[
"module: mkldnn",
"open source",
"topic: not user facing",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147855
* #147360
* __->__ #147359
* #147073
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
2,859,627,391
|
Update torch-xpu-ops commit pin
|
xytintel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"keep-going",
"ciflow/xpu"
] | 3
|
CONTRIBUTOR
|
Update the torch-xpu-ops commit to [a14d1eaa834a616705068103dc8129319087e864](https://github.com/intel/torch-xpu-ops/commit/a14d1eaa834a616705068103dc8129319087e864), includes:
- SparseCSR XPU support
- Refine build system
| true
|
2,859,574,488
|
handle default in _NamedOptimizer
|
samsja
|
open
|
[
"oncall: distributed",
"triaged",
"open source",
"Stale"
] | 7
|
NONE
|
This PR propagates the `defaults` field of the wrapped optimizer to the `_NamedOptimizer`.
This fixes a bug where `torch.compile` would fail when calling `optimizer.zero_grad()`:
```bash
[rank1]: AttributeError: '_NamedOptimizer' object has no attribute 'defaults'
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/prime-rl/train.py", line 256, in <module>
[rank0]: train(config)
[rank0]: File "/root/prime-rl/train.py", line 199, in train
[rank0]: optimizer.zero_grad()
[rank0]: File "/root/prime-rl/.venv/lib/python3.10/site-packages/torch/_compile.py", line 32, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: File "/root/prime-rl/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/root/prime-rl/.venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 955, in zero_grad
[rank0]: foreach = self.defaults.get("foreach", False) or self.defaults.get(
[rank0]: AttributeError: '_NamedOptimizer' object has no attribute 'defaults'
```
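A minimal sketch of the idea (illustrative only; the attribute and parameter names here are assumptions, not the actual patch):
```python
class _NamedOptimizerSketch:
    def __init__(self, named_parameters, optimizer_class, *args, **kwargs):
        params = [p for _, p in named_parameters]
        self._optimizer = optimizer_class(params, *args, **kwargs)
        # Propagate the wrapped optimizer's hyperparameter defaults so that
        # Optimizer.zero_grad(), which reads self.defaults, keeps working.
        self.defaults = self._optimizer.defaults
```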
PS: When can we get `_NamedOptimizer` as a public API?
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,859,497,811
|
Validate inputs to _nested_view_from_buffer to prevent overflows
|
mikaylagawarecki
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147356
* #147354
* #147352
| true
|
2,859,478,145
|
Skip FP8 op for Intel GPU
|
daisyden
|
open
|
[
"open source",
"Stale",
"release notes: python_frontend",
"ciflow/xpu",
"release notes: xpu"
] | 4
|
NONE
|
The Intel GPU backend does not have float8 support at present. To fulfil the [RFC](https://github.com/pytorch/pytorch/issues/114850), this PR disables the float8 dtypes for torch.eye and torch._scaled_mm in op_db, so that the float8 tests can be skipped on XPU.
| true
|
2,859,457,505
|
Make Tensor.set_ validate storage_offset when sizes/strides are unchanged
|
mikaylagawarecki
|
closed
|
[
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/slow",
"ci-no-td"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147356
* __->__ #147354
* #147352
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,859,224,986
|
Use float data type for Half sum in fallback implementation of batchnorm backward on CPU
|
CaoE
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Fixes #147303.
Use the float data type for Half sums in the fallback implementation of batchnorm backward on CPU, because the representable range of Half is small.
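For illustration (not part of the patch), a reduction whose result exceeds Half's maximum finite value (~65504) overflows unless it is computed in float:
```python
import torch

x = torch.full((70000,), 1.0, dtype=torch.half)
print(x.sum())                     # inf: the result does not fit in Half
print(x.sum(dtype=torch.float32))  # tensor(70000.): computing the sum in float avoids the overflow
```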
| true
|
2,859,213,828
|
Fix overflow in checkInBoundsForStorage
|
mikaylagawarecki
|
closed
|
[
"Merged",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Use `computeStorageNbytes` (which checks for overflows) so that the storage_offset is included in the bounds computation.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #147356
* #147354
* __->__ #147352
| true
|
2,859,210,859
|
[Inductor UT][XPU] Skip fft_c2c case since it's not implemented on XPU.
|
etaf
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"keep-going",
"ciflow/xpu"
] | 6
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147351
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,859,181,469
|
Add meta function for out variants of ones,zeros,empty
|
cz2h
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 18
|
CONTRIBUTOR
|
Fixes #135832
For aten.ones and aten.zeros, followed this [link](https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit?tab=t.0#heading=h.64r4npvq0w0) to register meta functions.
For aten.empty.out, followed this [part](https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit?tab=t.0#heading=h.iy9lxhxhtl5v) to register a decomp for empty that handles FakeTensor inputs.
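For context, a generic sketch of how a fake/meta kernel is registered for a custom op (illustrative only; the PR itself registers meta functions/decomps for the existing aten out-variants):
```python
import torch

@torch.library.custom_op("mylib::ones_like_slow", mutates_args=())
def ones_like_slow(x: torch.Tensor) -> torch.Tensor:
    return torch.ones_like(x)

@torch.library.register_fake("mylib::ones_like_slow")
def _(x: torch.Tensor) -> torch.Tensor:
    # A fake/meta kernel only describes the output's metadata (shape/dtype/device),
    # which is what FakeTensor propagation needs; it never computes real data.
    return torch.empty_like(x)
```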
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,859,168,064
|
Refine XPU oneDNN context manager API
|
guangyey
|
closed
|
[
"module: cpu",
"open source",
"Merged",
"ciflow/trunk",
"topic: improvements",
"ciflow/xpu",
"release notes: xpu"
] | 22
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147349
# Motivation
This PR introduces improvements to the XPU oneDNN context manager API:
- `GpuEngineManager::get_engine`: Added a new API that accepts a `DeviceIndex` to simplify code and improve usability - by default, using the current device index.
- `GpuStreamManager::get_stream`: Now explicitly requires a `DeviceIndex` as input to ensure correctness and consistency - by default, using the current device index.
Additionally, it enhances integration with `c10::DeviceGuard`, ensuring correct device management.
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
2,859,162,100
|
[pipelining] AttributeError: 'InterpreterModule' object has no attribute
|
kyoungbinkim
|
open
|
[
"oncall: distributed",
"triaged",
"module: pipelining"
] | 0
|
NONE
|
### 🐛 Describe the bug
I am currently implementing distributed training using pipelining for LLaMA 3.2.
model source code : https://github.com/pytorch/torchtune/blob/main/torchtune/models/llama3_2/_component_builders.py#L43
Below is the source code.
```
_model = llama3_2_1b()
_tokenizer = llama3_tokenizer(str(Path.joinpath(checkpoint_dir, 'tokenizer.model')))
pipe = pipeline(
    module=_model,
    mb_args=(example,),
)
```
Below is the Model.
```
TransformerDecoder(
(tok_embeddings): Embedding(128256, 2048)
(layers): ModuleList(
(0-15): 16 x TransformerSelfAttentionLayer(
(attn): MultiHeadAttention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): Linear(in_features=2048, out_features=512, bias=False)
(v_proj): Linear(in_features=2048, out_features=512, bias=False)
(output_proj): Linear(in_features=2048, out_features=2048, bias=False)
(pos_embeddings): Llama3ScaledRoPE()
)
(mlp): FeedForward(
(w1): Linear(in_features=2048, out_features=8192, bias=False)
(w2): Linear(in_features=8192, out_features=2048, bias=False)
(w3): Linear(in_features=2048, out_features=8192, bias=False)
(activation): SiLU()
)
(sa_norm): RMSNorm()
(mlp_norm): RMSNorm()
(sa_scale): Identity()
(mlp_scale): Identity()
)
)
(norm): RMSNorm()
)
```
Below is the Error message
```
Traceback (most recent call last):
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/train.py", line 82, in <module>
pipe = pipeline(
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/.venv/lib/python3.10/site-packages/torch/distributed/pipelining/_IR.py", line 1247, in pipeline
return Pipe.from_tracing(
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/.venv/lib/python3.10/site-packages/torch/distributed/pipelining/_IR.py", line 1053, in from_tracing
pipe = Pipe._from_traced(
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/.venv/lib/python3.10/site-packages/torch/distributed/pipelining/_IR.py", line 919, in _from_traced
_sink_params(submod, inputs_to_state, [])
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/.venv/lib/python3.10/site-packages/torch/export/unflatten.py", line 1587, in _sink_params
submod_id_to_inputs_removed = _sink_params(
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/.venv/lib/python3.10/site-packages/torch/export/unflatten.py", line 1587, in _sink_params
submod_id_to_inputs_removed = _sink_params(
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/.venv/lib/python3.10/site-packages/torch/export/unflatten.py", line 1587, in _sink_params
submod_id_to_inputs_removed = _sink_params(
[Previous line repeated 1 more time]
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/.venv/lib/python3.10/site-packages/torch/export/unflatten.py", line 1653, in _sink_params
state_attr = _get_attr_via_attr_list(module, attr_path)
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/.venv/lib/python3.10/site-packages/torch/fx/graph_module.py", line 289, in _get_attr_via_attr_list
return getattr(t, field)
File "/data/workspace/kim/DeLAP/demo/pytorch/finetuning/llama3.2/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1940, in __getattr__
raise AttributeError(
AttributeError: 'InterpreterModule' object has no attribute 'cache'
```
Below is the GraphModule
```
GraphModule(
(tok_embeddings): InterpreterModule()
(layers): InterpreterModule(
(0): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(1): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(2): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(3): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(4): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(5): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(6): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(7): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(8): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(9): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(10): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(11): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(12): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(13): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(14): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
(15): InterpreterModule(
(sa_norm): InterpreterModule()
(attn): InterpreterModule(
(q_proj): InterpreterModule()
(pos_embeddings): InterpreterModule()
(k_proj): InterpreterModule()
(v_proj): InterpreterModule()
(pos_embeddings@1): InterpreterModule()
(output_proj): InterpreterModule()
)
(mlp_norm): InterpreterModule()
(mlp): InterpreterModule(
(w1): InterpreterModule()
(activation): InterpreterModule()
(w3): InterpreterModule()
(w2): InterpreterModule()
)
)
)
(norm): InterpreterModule()
(output): InterpreterModule(
(linear): InterpreterModule()
)
)
def forward(self, tok_embeddings_weight, tokens, layers_0_sa_norm_scale, layers_0_attn_q_proj_weight, layers_15_attn_pos_embeddings_cache, layers_0_attn_k_proj_weight, layers_0_attn_v_proj_weight, layers_0_attn_output_proj_weight, layers_0_mlp_norm_scale, layers_0_mlp_w1_weight, layers_0_mlp_w3_weight, layers_0_mlp_w2_weight, layers_1_sa_norm_scale, layers_1_attn_q_proj_weight, layers_1_attn_k_proj_weight, layers_1_attn_v_proj_weight, layers_1_attn_output_proj_weight, layers_1_mlp_norm_scale, layers_1_mlp_w1_weight, layers_1_mlp_w3_weight, layers_1_mlp_w2_weight, layers_2_sa_norm_scale, layers_2_attn_q_proj_weight, layers_2_attn_k_proj_weight, layers_2_attn_v_proj_weight, layers_2_attn_output_proj_weight, layers_2_mlp_norm_scale, layers_2_mlp_w1_weight, layers_2_mlp_w3_weight, layers_2_mlp_w2_weight, layers_3_sa_norm_scale, layers_3_attn_q_proj_weight, layers_3_attn_k_proj_weight, layers_3_attn_v_proj_weight, layers_3_attn_output_proj_weight, layers_3_mlp_norm_scale, layers_3_mlp_w1_weight, layers_3_mlp_w3_weight, layers_3_mlp_w2_weight, layers_4_sa_norm_scale, layers_4_attn_q_proj_weight, layers_4_attn_k_proj_weight, layers_4_attn_v_proj_weight, layers_4_attn_output_proj_weight, layers_4_mlp_norm_scale, layers_4_mlp_w1_weight, layers_4_mlp_w3_weight, layers_4_mlp_w2_weight, layers_5_sa_norm_scale, layers_5_attn_q_proj_weight, layers_5_attn_k_proj_weight, layers_5_attn_v_proj_weight, layers_5_attn_output_proj_weight, layers_5_mlp_norm_scale, layers_5_mlp_w1_weight, layers_5_mlp_w3_weight, layers_5_mlp_w2_weight, layers_6_sa_norm_scale, layers_6_attn_q_proj_weight, layers_6_attn_k_proj_weight, layers_6_attn_v_proj_weight, layers_6_attn_output_proj_weight, layers_6_mlp_norm_scale, layers_6_mlp_w1_weight, layers_6_mlp_w3_weight, layers_6_mlp_w2_weight, layers_7_sa_norm_scale, layers_7_attn_q_proj_weight, layers_7_attn_k_proj_weight, layers_7_attn_v_proj_weight, layers_7_attn_output_proj_weight, layers_7_mlp_norm_scale, layers_7_mlp_w1_weight, layers_7_mlp_w3_weight, layers_7_mlp_w2_weight, layers_8_sa_norm_scale, layers_8_attn_q_proj_weight, layers_8_attn_k_proj_weight, layers_8_attn_v_proj_weight, layers_8_attn_output_proj_weight, layers_8_mlp_norm_scale, layers_8_mlp_w1_weight, layers_8_mlp_w3_weight, layers_8_mlp_w2_weight, layers_9_sa_norm_scale, layers_9_attn_q_proj_weight, layers_9_attn_k_proj_weight, layers_9_attn_v_proj_weight, layers_9_attn_output_proj_weight, layers_9_mlp_norm_scale, layers_9_mlp_w1_weight, layers_9_mlp_w3_weight, layers_9_mlp_w2_weight, layers_10_sa_norm_scale, layers_10_attn_q_proj_weight, layers_10_attn_k_proj_weight, layers_10_attn_v_proj_weight, layers_10_attn_output_proj_weight, layers_10_mlp_norm_scale, layers_10_mlp_w1_weight, layers_10_mlp_w3_weight, layers_10_mlp_w2_weight, layers_11_sa_norm_scale, layers_11_attn_q_proj_weight, layers_11_attn_k_proj_weight, layers_11_attn_v_proj_weight, layers_11_attn_output_proj_weight, layers_11_mlp_norm_scale, layers_11_mlp_w1_weight, layers_11_mlp_w3_weight, layers_11_mlp_w2_weight, layers_12_sa_norm_scale, layers_12_attn_q_proj_weight, layers_12_attn_k_proj_weight, layers_12_attn_v_proj_weight, layers_12_attn_output_proj_weight, layers_12_mlp_norm_scale, layers_12_mlp_w1_weight, layers_12_mlp_w3_weight, layers_12_mlp_w2_weight, layers_13_sa_norm_scale, layers_13_attn_q_proj_weight, layers_13_attn_k_proj_weight, layers_13_attn_v_proj_weight, layers_13_attn_output_proj_weight, layers_13_mlp_norm_scale, layers_13_mlp_w1_weight, layers_13_mlp_w3_weight, layers_13_mlp_w2_weight, layers_14_sa_norm_scale, 
layers_14_attn_q_proj_weight, layers_14_attn_k_proj_weight, layers_14_attn_v_proj_weight, layers_14_attn_output_proj_weight, layers_14_mlp_norm_scale, layers_14_mlp_w1_weight, layers_14_mlp_w3_weight, layers_14_mlp_w2_weight, layers_15_sa_norm_scale, layers_15_attn_q_proj_weight, layers_15_attn_k_proj_weight, layers_15_attn_v_proj_weight, layers_15_attn_output_proj_weight, layers_15_mlp_norm_scale, layers_15_mlp_w1_weight, layers_15_mlp_w3_weight, layers_15_mlp_w2_weight, norm_scale):
tok_embeddings = self.tok_embeddings(tokens, tok_embeddings_weight); tokens = None
layers_0 = getattr(self.layers, "0")(layers_0_mlp_w2_weight, layers_0_mlp_w3_weight, layers_0_mlp_w1_weight, layers_0_mlp_norm_scale, layers_0_attn_output_proj_weight, layers_0_attn_v_proj_weight, layers_0_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_0_attn_q_proj_weight, layers_0_sa_norm_scale, tok_embeddings); layers_0_mlp_w2_weight = layers_0_mlp_w3_weight = layers_0_mlp_w1_weight = layers_0_mlp_norm_scale = layers_0_attn_output_proj_weight = layers_0_attn_v_proj_weight = layers_0_attn_k_proj_weight = layers_0_attn_q_proj_weight = layers_0_sa_norm_scale = tok_embeddings = None
layers_1 = getattr(self.layers, "1")(layers_1_mlp_w2_weight, layers_1_mlp_w3_weight, layers_1_mlp_w1_weight, layers_1_mlp_norm_scale, layers_1_attn_output_proj_weight, layers_1_attn_v_proj_weight, layers_1_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_1_attn_q_proj_weight, layers_1_sa_norm_scale, layers_0); layers_1_mlp_w2_weight = layers_1_mlp_w3_weight = layers_1_mlp_w1_weight = layers_1_mlp_norm_scale = layers_1_attn_output_proj_weight = layers_1_attn_v_proj_weight = layers_1_attn_k_proj_weight = layers_1_attn_q_proj_weight = layers_1_sa_norm_scale = layers_0 = None
layers_2 = getattr(self.layers, "2")(layers_2_mlp_w2_weight, layers_2_mlp_w3_weight, layers_2_mlp_w1_weight, layers_2_mlp_norm_scale, layers_2_attn_output_proj_weight, layers_2_attn_v_proj_weight, layers_2_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_2_attn_q_proj_weight, layers_2_sa_norm_scale, layers_1); layers_2_mlp_w2_weight = layers_2_mlp_w3_weight = layers_2_mlp_w1_weight = layers_2_mlp_norm_scale = layers_2_attn_output_proj_weight = layers_2_attn_v_proj_weight = layers_2_attn_k_proj_weight = layers_2_attn_q_proj_weight = layers_2_sa_norm_scale = layers_1 = None
layers_3 = getattr(self.layers, "3")(layers_3_mlp_w2_weight, layers_3_mlp_w3_weight, layers_3_mlp_w1_weight, layers_3_mlp_norm_scale, layers_3_attn_output_proj_weight, layers_3_attn_v_proj_weight, layers_3_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_3_attn_q_proj_weight, layers_3_sa_norm_scale, layers_2); layers_3_mlp_w2_weight = layers_3_mlp_w3_weight = layers_3_mlp_w1_weight = layers_3_mlp_norm_scale = layers_3_attn_output_proj_weight = layers_3_attn_v_proj_weight = layers_3_attn_k_proj_weight = layers_3_attn_q_proj_weight = layers_3_sa_norm_scale = layers_2 = None
layers_4 = getattr(self.layers, "4")(layers_4_mlp_w2_weight, layers_4_mlp_w3_weight, layers_4_mlp_w1_weight, layers_4_mlp_norm_scale, layers_4_attn_output_proj_weight, layers_4_attn_v_proj_weight, layers_4_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_4_attn_q_proj_weight, layers_4_sa_norm_scale, layers_3); layers_4_mlp_w2_weight = layers_4_mlp_w3_weight = layers_4_mlp_w1_weight = layers_4_mlp_norm_scale = layers_4_attn_output_proj_weight = layers_4_attn_v_proj_weight = layers_4_attn_k_proj_weight = layers_4_attn_q_proj_weight = layers_4_sa_norm_scale = layers_3 = None
layers_5 = getattr(self.layers, "5")(layers_5_mlp_w2_weight, layers_5_mlp_w3_weight, layers_5_mlp_w1_weight, layers_5_mlp_norm_scale, layers_5_attn_output_proj_weight, layers_5_attn_v_proj_weight, layers_5_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_5_attn_q_proj_weight, layers_5_sa_norm_scale, layers_4); layers_5_mlp_w2_weight = layers_5_mlp_w3_weight = layers_5_mlp_w1_weight = layers_5_mlp_norm_scale = layers_5_attn_output_proj_weight = layers_5_attn_v_proj_weight = layers_5_attn_k_proj_weight = layers_5_attn_q_proj_weight = layers_5_sa_norm_scale = layers_4 = None
layers_6 = getattr(self.layers, "6")(layers_6_mlp_w2_weight, layers_6_mlp_w3_weight, layers_6_mlp_w1_weight, layers_6_mlp_norm_scale, layers_6_attn_output_proj_weight, layers_6_attn_v_proj_weight, layers_6_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_6_attn_q_proj_weight, layers_6_sa_norm_scale, layers_5); layers_6_mlp_w2_weight = layers_6_mlp_w3_weight = layers_6_mlp_w1_weight = layers_6_mlp_norm_scale = layers_6_attn_output_proj_weight = layers_6_attn_v_proj_weight = layers_6_attn_k_proj_weight = layers_6_attn_q_proj_weight = layers_6_sa_norm_scale = layers_5 = None
layers_7 = getattr(self.layers, "7")(layers_7_mlp_w2_weight, layers_7_mlp_w3_weight, layers_7_mlp_w1_weight, layers_7_mlp_norm_scale, layers_7_attn_output_proj_weight, layers_7_attn_v_proj_weight, layers_7_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_7_attn_q_proj_weight, layers_7_sa_norm_scale, layers_6); layers_7_mlp_w2_weight = layers_7_mlp_w3_weight = layers_7_mlp_w1_weight = layers_7_mlp_norm_scale = layers_7_attn_output_proj_weight = layers_7_attn_v_proj_weight = layers_7_attn_k_proj_weight = layers_7_attn_q_proj_weight = layers_7_sa_norm_scale = layers_6 = None
layers_8 = getattr(self.layers, "8")(layers_8_mlp_w2_weight, layers_8_mlp_w3_weight, layers_8_mlp_w1_weight, layers_8_mlp_norm_scale, layers_8_attn_output_proj_weight, layers_8_attn_v_proj_weight, layers_8_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_8_attn_q_proj_weight, layers_8_sa_norm_scale, layers_7); layers_8_mlp_w2_weight = layers_8_mlp_w3_weight = layers_8_mlp_w1_weight = layers_8_mlp_norm_scale = layers_8_attn_output_proj_weight = layers_8_attn_v_proj_weight = layers_8_attn_k_proj_weight = layers_8_attn_q_proj_weight = layers_8_sa_norm_scale = layers_7 = None
layers_9 = getattr(self.layers, "9")(layers_9_mlp_w2_weight, layers_9_mlp_w3_weight, layers_9_mlp_w1_weight, layers_9_mlp_norm_scale, layers_9_attn_output_proj_weight, layers_9_attn_v_proj_weight, layers_9_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_9_attn_q_proj_weight, layers_9_sa_norm_scale, layers_8); layers_9_mlp_w2_weight = layers_9_mlp_w3_weight = layers_9_mlp_w1_weight = layers_9_mlp_norm_scale = layers_9_attn_output_proj_weight = layers_9_attn_v_proj_weight = layers_9_attn_k_proj_weight = layers_9_attn_q_proj_weight = layers_9_sa_norm_scale = layers_8 = None
layers_10 = getattr(self.layers, "10")(layers_10_mlp_w2_weight, layers_10_mlp_w3_weight, layers_10_mlp_w1_weight, layers_10_mlp_norm_scale, layers_10_attn_output_proj_weight, layers_10_attn_v_proj_weight, layers_10_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_10_attn_q_proj_weight, layers_10_sa_norm_scale, layers_9); layers_10_mlp_w2_weight = layers_10_mlp_w3_weight = layers_10_mlp_w1_weight = layers_10_mlp_norm_scale = layers_10_attn_output_proj_weight = layers_10_attn_v_proj_weight = layers_10_attn_k_proj_weight = layers_10_attn_q_proj_weight = layers_10_sa_norm_scale = layers_9 = None
layers_11 = getattr(self.layers, "11")(layers_11_mlp_w2_weight, layers_11_mlp_w3_weight, layers_11_mlp_w1_weight, layers_11_mlp_norm_scale, layers_11_attn_output_proj_weight, layers_11_attn_v_proj_weight, layers_11_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_11_attn_q_proj_weight, layers_11_sa_norm_scale, layers_10); layers_11_mlp_w2_weight = layers_11_mlp_w3_weight = layers_11_mlp_w1_weight = layers_11_mlp_norm_scale = layers_11_attn_output_proj_weight = layers_11_attn_v_proj_weight = layers_11_attn_k_proj_weight = layers_11_attn_q_proj_weight = layers_11_sa_norm_scale = layers_10 = None
layers_12 = getattr(self.layers, "12")(layers_12_mlp_w2_weight, layers_12_mlp_w3_weight, layers_12_mlp_w1_weight, layers_12_mlp_norm_scale, layers_12_attn_output_proj_weight, layers_12_attn_v_proj_weight, layers_12_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_12_attn_q_proj_weight, layers_12_sa_norm_scale, layers_11); layers_12_mlp_w2_weight = layers_12_mlp_w3_weight = layers_12_mlp_w1_weight = layers_12_mlp_norm_scale = layers_12_attn_output_proj_weight = layers_12_attn_v_proj_weight = layers_12_attn_k_proj_weight = layers_12_attn_q_proj_weight = layers_12_sa_norm_scale = layers_11 = None
layers_13 = getattr(self.layers, "13")(layers_13_mlp_w2_weight, layers_13_mlp_w3_weight, layers_13_mlp_w1_weight, layers_13_mlp_norm_scale, layers_13_attn_output_proj_weight, layers_13_attn_v_proj_weight, layers_13_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_13_attn_q_proj_weight, layers_13_sa_norm_scale, layers_12); layers_13_mlp_w2_weight = layers_13_mlp_w3_weight = layers_13_mlp_w1_weight = layers_13_mlp_norm_scale = layers_13_attn_output_proj_weight = layers_13_attn_v_proj_weight = layers_13_attn_k_proj_weight = layers_13_attn_q_proj_weight = layers_13_sa_norm_scale = layers_12 = None
layers_14 = getattr(self.layers, "14")(layers_14_mlp_w2_weight, layers_14_mlp_w3_weight, layers_14_mlp_w1_weight, layers_14_mlp_norm_scale, layers_14_attn_output_proj_weight, layers_14_attn_v_proj_weight, layers_14_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_14_attn_q_proj_weight, layers_14_sa_norm_scale, layers_13); layers_14_mlp_w2_weight = layers_14_mlp_w3_weight = layers_14_mlp_w1_weight = layers_14_mlp_norm_scale = layers_14_attn_output_proj_weight = layers_14_attn_v_proj_weight = layers_14_attn_k_proj_weight = layers_14_attn_q_proj_weight = layers_14_sa_norm_scale = layers_13 = None
layers_15 = getattr(self.layers, "15")(layers_15_mlp_w2_weight, layers_15_mlp_w3_weight, layers_15_mlp_w1_weight, layers_15_mlp_norm_scale, layers_15_attn_output_proj_weight, layers_15_attn_v_proj_weight, layers_15_attn_k_proj_weight, layers_15_attn_pos_embeddings_cache, layers_15_attn_q_proj_weight, layers_15_sa_norm_scale, layers_14); layers_15_mlp_w2_weight = layers_15_mlp_w3_weight = layers_15_mlp_w1_weight = layers_15_mlp_norm_scale = layers_15_attn_output_proj_weight = layers_15_attn_v_proj_weight = layers_15_attn_k_proj_weight = layers_15_attn_pos_embeddings_cache = layers_15_attn_q_proj_weight = layers_15_sa_norm_scale = layers_14 = None
norm = self.norm(norm_scale, layers_15); norm_scale = layers_15 = None
output_linear = self.output.linear(tok_embeddings_weight, norm); tok_embeddings_weight = norm = None
to_dtype_98 = torch.ops.aten.to.dtype(output_linear, torch.float32); output_linear = None
return to_dtype_98
```
Thanks for the help.
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250216+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 4000
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 4
CPU max MHz: 3700.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250216+cu126
[pip3] torchao==0.9.0.dev20250217+cu126
[pip3] torchtune==0.6.0.dev20250215+cpu
[pip3] torchvision==0.22.0.dev20250216+cu126
[conda] Could not collect
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
| true
|
2,859,117,429
|
[Inductor UT][Windows][XPU] Enable Inductor UT on XPU Windows.
|
etaf
|
closed
|
[
"open source",
"Merged",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #146481
* __->__ #147347
This PR removes the restrictions on general cases for XPU on Windows, allowing us to run Inductor UT on Windows.
Additionally, this series of PRs has also fixed all XPU Inductor UT issues on Windows. However, due to resource constraints, we have not yet set up a Windows CI pipeline online.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,859,062,895
|
Update imagenet.py according to directions in #142306
|
Dmurillo722
|
closed
|
[
"open source",
"topic: not user facing"
] | 3
|
NONE
|
Fixes #142306 (Improve typing of args and kwargs with ParamSpec).
Description: This pull request makes the changes specified in #142306 regarding typing in the imagenet.py file, replacing instances of `*args: Any` and `**kwargs: Any` with a `typing_extensions.ParamSpec` (`P.args` and `P.kwargs`). The corresponding function calls were updated as well.
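A generic sketch of the pattern (illustrative; the actual function names in imagenet.py may differ):
```python
from typing import Callable, TypeVar
from typing_extensions import ParamSpec

P = ParamSpec("P")
R = TypeVar("R")

def call_logged(fn: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R:
    # The wrapper now preserves fn's exact signature for type checkers
    # instead of erasing it to (*args: Any, **kwargs: Any).
    print(f"calling {fn.__name__}")
    return fn(*args, **kwargs)
```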
| true
|
2,859,017,208
|
[executorch hash update] update the pinned executorch hash
|
pytorchupdatebot
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned executorch hash.
| true
|
2,859,008,481
|
Support size oblivious max equation
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147344
Addresses https://github.com/pytorch/pytorch/issues/125914 by detecting when we have a sym_max between {0, 1} and a summation of size-like unbacked symints.
The basic idea is max(1, u0 + u1) can be simplified to u0 + u1 if both u0 and u1 are size-like since their value ranges are [2, inf].
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,859,002,419
|
[DCP] Cache save plans in default planner
|
saumishr
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: new features",
"topic: not user facing",
"oncall: distributed checkpointing"
] | 17
|
CONTRIBUTOR
|
Summary:
This PR caches the save plans to significantly reduce the collective cost for successive checkpoint save attempts. Here is the high-level approach:
- Create the local plan and cache it.
- In the next iteration, compare the local plan with the cached plan metadata. If nothing changed, do not send that local plan in the collective.
- The global-plan step will only create the global plan from the new delta plans, with empty plans for the cached ones.
- The finish-plan step checks for empty plans. If a plan is empty, it grabs the cached plan; otherwise it uses the new plan provided.
Test Plan: UTs
Differential Revision: D69224491
## How to enable the caching:
`DefaultSavePlanner` introduces `enable_plan_caching`, which is set to `False` by default for now.
https://github.com/pytorch/pytorch/pull/147343/files#diff-579bbb7b82572753afa91085fbf954f7c7613ff8376da9b26153d5cc3a3c4ee8R77
Set this to `True` to enable the caching; we should see a significant speed-up in subsequent checkpoint save attempts, especially for larger-scale jobs. Reference issue: https://github.com/pytorch/pytorch/issues/123695
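A minimal usage sketch (assuming the standard `torch.distributed.checkpoint.save` entry point; the state dict and paths are placeholders):
```python
import torch
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.default_planner import DefaultSavePlanner

state_dict = {"weight": torch.ones(4)}  # placeholder for a real model/optimizer state dict
planner = DefaultSavePlanner(enable_plan_caching=True)  # flag introduced by this PR
dcp.save(state_dict, checkpoint_id="/tmp/ckpt_1", planner=planner)
dcp.save(state_dict, checkpoint_id="/tmp/ckpt_2", planner=planner)  # reuses the cached local plan
```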
## Experiment results:
```
Model size: 1.6 TB post dedupe, Ranks: 256
First checkpoint save time: 280s.
Subsequent checkpoint save time: 155s.
E2e latency improvement: ~45%
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0
| true
|
2,858,974,416
|
Add no_data_dependent_graph_break mode
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 9
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147342
This adds a strict mode, `TORCHDYNAMO_UNBACKED_STRICT`, to prevent graph breaks when we guard on data-dependent expressions. This is a better UX for users who are actively trying to make their model more dynamic but aren't close enough to full-graph compilation to use that flag directly.
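Assuming the mode is toggled purely via the environment variable named above, enabling it looks like this (sketch):
```python
import os
os.environ["TORCHDYNAMO_UNBACKED_STRICT"] = "1"  # assumed: set before importing torch so dynamo reads it

import torch

@torch.compile
def f(x):
    n = x.nonzero().shape[0]  # data-dependent (unbacked) size
    if n > 1:                 # guarding on it would normally just graph-break
        return x + 1
    return x - 1

# Per this PR's description, the guard above should now raise in strict mode
# instead of silently falling back to eager via a graph break.
f(torch.randn(4))
```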
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,858,854,911
|
[NOT_FOR_COMMIT] Try Triton-cpu-arm
|
digantdesai
|
open
|
[
"Stale",
"release notes: releng",
"module: inductor",
"ciflow/inductor",
"ciflow/linux-aarch64"
] | 2
|
CONTRIBUTOR
|
Fixes #ISSUE_NUMBER
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,858,832,111
|
[codegen] enable SORT and TUPLE_REDUCTION for AMD Triton
|
chenyang78
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147340
Looks like Triton's AMD backend supports multiple inputs already.
Let's enable SORT and TUPLE_REDUCTION for it.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,858,726,376
|
Fix torch.compile Fallback for Meta Device Tensors
|
Waknis
|
open
|
[
"triaged",
"open source",
"Stale",
"topic: not user facing",
"module: inductor"
] | 4
|
NONE
|
Fixes #144607 by updating the fallback behavior in torch/__init__.py for cases when a function compiled with torch.compile is called with a tensor on the "meta" device. Instead of raising a lowering exception, the change transparently falls back to eager execution.
Additionally, this PR adds a new test (test/inductor/test_meta_compile_fallback.py) that:
- Verifies normal behavior when using CUDA tensors.
- Ensures that when a meta tensor is provided, the function correctly falls back to eager execution and returns a tensor with the expected shape and "meta" device (sketched below).
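A sketch of the meta-tensor case described above (paraphrased from the description; the actual test may differ):
```python
import torch

@torch.compile
def f(x):
    return x + 1

out = f(torch.empty(8, device="meta"))
# With this PR, the call falls back to eager instead of raising a lowering error,
# and the result keeps the input's shape and "meta" device.
assert out.shape == (8,) and out.device.type == "meta"
```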
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,858,721,228
|
The custom metadata in fx Graph Node is not kept after `run_decompositions`
|
junpeiz
|
closed
|
[
"oncall: pt2",
"export-triaged",
"oncall: export"
] | 3
|
NONE
|
### 🐛 Describe the bug
In `run_decompositions`, the Graph's metadata and some nodes' metadata are preserved, but other nodes' metadata is lost.
Here is a test case to reproduce it:
```
def test_torch_decomposition_keep_metadata() -> None:
    """Make sure the metadata is kept after exported program run_decompositions."""

    @torch.library.custom_op("mylib::add", mutates_args=())
    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor: ...

    @torch.library.register_fake("mylib::add")
    def _(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return torch.empty_like(x)

    class TestModel(torch.nn.Module):
        def forward(self, x, y):
            return torch.ops.mylib.add(x, y)

    model = TestModel()
    x_example = torch.randn(2, 3)
    y_example = torch.randn(2, 3)
    exported_program = torch.export.export(model, (x_example, y_example))

    for node in exported_program.graph.nodes:
        node.meta["my_field"] = "dummy"
    for node in exported_program.graph.nodes:
        assert node.meta["my_field"] == "dummy"

    decomposed_program = exported_program.run_decompositions()
    for node in decomposed_program.graph.nodes:
        assert node.meta["my_field"] == "dummy"  # This errors out because custom metadata is lost
```
### Versions
I tried 2.5, 2.6, and the 2.7 nightly, and none of them works.
2.6 did improve over 2.5 in that the custom metadata of the `output` node is preserved, but the other nodes still lose their custom metadata.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
| true
|
2,858,692,827
|
Enable a fast path for (static) qlinear for AArch64 through ACL directly.
|
fadara01
|
closed
|
[
"module: cpu",
"triaged",
"open source",
"module: arm",
"release notes: quantization",
"release notes: releng",
"ciflow/linux-aarch64",
"arm priority"
] | 8
|
COLLABORATOR
|
This enables a fast path for eager mode statically quantized matmuls for AArch64 through Arm Compute Library (ACL) directly.
PR #145942 addressed the high overhead in `qlinear_dynamic` on AArch64 (due to redundant weight pre-transpositions and reductions) by enabling a path that calls ACL directly.
This does the same thing and addresses the same overheads for (static) `qlinear`.
I benchmarked this PR (ACL direct integration for static quantization in ATen) against the current state of PyTorch (with #147498 which updates oneDNN to v3.7 included because it's a much stronger baseline than the current oneDNN version in PyTorch which is v3.5.3). See benchmarking script below.
My benchmark runs statically quantized linears for all combinations of `M = [8, 16, ..., 512]`, `K = [768, 1024, 2048, 4096]`, `N = [768, 1024, 2048, 4096]`.
This PR gives an average speedup of **2x** for signed activations (`s8s8s8`) and **95x** for unsigned activations (`u8s8u8`) on a Neoverse-V1 with 16 threads.
The astronomical speedup for unsigned activation is because oneDNN v3.7 does not have an optimized implementation for `u8s8u8` on AArch64.
```
# SPDX-FileCopyrightText: Copyright 2025 Arm Limited and/or its affiliate <open-source-office@arm.com>
# SPDX-License-Identifier: BSD-3-Clause
import torch
import torch.nn as nn
from torch.quantization import QConfig
from torch.ao.quantization.observer import HistogramObserver, default_weight_observer
import torch
import torch.nn as nn
import numpy as np
import random
from argparse import ArgumentParser
import time
class ModelArgumentParser(ArgumentParser):
    def __init__(self) -> None:
        super().__init__()
        self.add_argument("--M",
                          help="M dimension",
                          type=int,
                          default=64
                          )
        self.add_argument("--K",
                          help="K dimension",
                          type=int,
                          default=64
                          )
        self.add_argument("--N",
                          help="N dimension",
                          type=int,
                          default=64
                          )
        self.add_argument("--signed_input",
                          help="Use (signed) torch.qint8 for inputs instead of (unsigned) torch.quint8",
                          action="store_true"
                          )
        self.add_argument("--seed",
                          help="Random seed",
                          type=int,
                          default=42
                          )
        self.add_argument("--iters",
                          help="benchmark iterations",
                          default=500)

def set_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

class LinearModel(nn.Module):
    def __init__(self, K, N):
        super(LinearModel, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc = nn.Linear(K, N)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.fc(x)
        x = self.dequant(x)
        return x

def quantize_model(model, args):
    qconfig = QConfig(
        activation=HistogramObserver.with_args(reduce_range=False,
                                               dtype=torch.qint8 if args.signed_input else torch.quint8),
        weight=default_weight_observer,
    )
    # Prepare the model for static quantization
    # Specify quantization configurations
    model.qconfig = qconfig
    model_prepared = torch.quantization.prepare(model_fp32)
    # Calibrate the model with sample inputs
    # Example input data for calibration
    with torch.no_grad():
        sample_data = torch.randn(args.M, args.K)
        model_prepared(sample_data)
    # Convert the prepared model to a quantized model
    model_quantized = torch.quantization.convert(model_prepared)
    return model_quantized

if __name__ == "__main__":
    parser = ModelArgumentParser()
    args = parser.parse_args()

    set_seed(args.seed)
    model_fp32 = LinearModel(args.K, args.N)
    model_quantized = quantize_model(model_fp32, args)
    inputs = torch.randn(args.M, args.K)

    times = []
    with torch.no_grad():
        # warmup
        for _ in range(10):
            model_quantized(inputs)
        # benchmark
        for _ in range(args.iters):
            s = time.time_ns()
            model_quantized(inputs)
            times.append((time.time_ns() - s) / 1e6)

    print("M,K,N,signed = ", args.M, args.K, args.N, args.signed_input)
    print("Min Times (ms) = ", min(times))
    print("Mean Times (ms) = ", np.mean(times))
```
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @malfet @snadampal @milpuz01
| true
|
2,858,622,441
|
Investigate FlexAttention performance degradation on low precision inputs
|
danielvegamyhre
|
open
|
[
"triaged",
"oncall: pt2",
"upstream triton",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 4
|
CONTRIBUTOR
|
Creating this issue to track my work investigating the root cause of unexpected slowdowns observed in flex attention using low precision input tensors.
## TL;DR
Current investigation seems to point to the root cause being related to a huge increase in shared memory access bank conflicts. Evidence so far points to the loading of fp8 V blocks into SRAM being the problem.
### Repro script
As a first step I wrote this repro [script](https://gist.github.com/danielvegamyhre/9aee78b63e263bad27d513f66b5dbbe4) which runs benchmarks and optionally produces traces, for bf16 and fp8 dtypes.
### Benchmark
Initial benchmarks show the flex attention forward pass takes roughly 1.39x longer with fp8 inputs than with bf16 inputs.
```bash
$ python3 profile_flex.py --fp8 --bf16
2025-02-16 21:51:55,038 - flex_bench - INFO - Running benchmark: bf16
2025-02-16 21:51:56,765 - flex_bench - INFO - bf16: 441.3840833333334 us
2025-02-16 21:51:56,772 - flex_bench - INFO - Running benchmark: fp8e4m3
2025-02-16 21:51:57,373 - flex_bench - INFO - fp8e4m3: 615.4808518518514 us
```
### Triton kernel analysis
The main difference between the triton kernels generated by inductor for "compiled_flex" and "compiled_scale_flex" is the existence of the following lines of code which implement the score mod func. Nothing here looks problematic to me.
```python
tmp0 = (qk).to(tl.float32)
tmp1 = tmp0 * tl.load(in_ptr8 + 0)
tmp2 = tmp1 * tl.load(in_ptr9 + 0)
post_mod_scores = tmp2
```
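At the Python level, a score mod along these lines would lower to the two multiplies above. This is a hedged reconstruction; the gist's actual score mod and scale handling may differ.
```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Placeholder dequant scales (assumed names and values).
q_scale = torch.tensor(1.0, device="cuda")
k_scale = torch.tensor(1.0, device="cuda")

def scale_mod(score, b, h, q_idx, kv_idx):
    # qk is upcast to fp32 by flex attention, then scaled twice,
    # matching tmp0 -> tmp1 -> tmp2 in the generated kernel.
    return score * q_scale * k_scale

compiled_scale_flex = torch.compile(flex_attention)
# out = compiled_scale_flex(q, k, v, score_mod=scale_mod)
```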
### NCU
We can use `ncu` to analyze the specific kernel which implements flex attention:
```bash
ncu --set detailed -k regex:triton_tem_.* python3 profile_flex.py --bf16
ncu --set detailed -k regex:triton_tem_.* python3 profile_flex.py --fp8
```
**Speed of light bf16**
```
triton_tem_fused_2 (8, 256, 1)x(256, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: GPU Speed Of Light Throughput
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 1.59
SM Frequency Ghz 1.24
Elapsed Cycles cycle 751,814
Memory Throughput % 43.38
DRAM Throughput % 17.69
Duration us 602.69
L1/TEX Cache Throughput % 45.31
L2 Cache Throughput % 21.36
SM Active Cycles cycle 719,559.25
Compute (SM) Throughput % 35.59
----------------------- ----------- ------------
```
**Speed of light fp8**
```
Section: GPU Speed Of Light Throughput
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 1.59
SM Frequency Ghz 1.23
Elapsed Cycles cycle 1,056,196
Memory Throughput % 72.38
DRAM Throughput % 8.70
Duration us 853.86
L1/TEX Cache Throughput % 74.56
L2 Cache Throughput % 9.74
SM Active Cycles cycle 1,022,350.08
Compute (SM) Throughput % 27.49
----------------------- ----------- ------------
```
**Uncoalesced shared memory access**
Importantly, in the NCU output for fp8 we get a warning about uncoalesced shared memory accesses causing excessive wavefronts. It seems likely this is related to the observed slowdown:
```
OPT Est. Speedup: 60.51%
This kernel has uncoalesced shared accesses resulting in a total of 58720256 excessive wavefronts (63% of the
total 92856320 wavefronts). Check the L1 Wavefronts Shared Excessive table for the primary source locations.
The CUDA Best Practices Guide
(https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#shared-memory-in-matrix-multiplication-c
-ab) has an example on optimizing shared memory accesses.
```
Next I generated some profiles for bf16 and fp8 to analyze in the NCU UI:
`TORCH_LOGS="output_code" TORCH_LOGS_OUT="compile_logs/fp8_log.txt" ncu --set detailed -k regex:triton_tem_.* -o profiles/fp8-prof python3 profile_flex.py --fp8`
Here I also observed that the fp8 profile has uncoalesced shared access warnings that are not present in the bf16 profile:

Diving deeper, we can see the exact line of triton code where this is occurring:

Looking at the sampling counts, we can see the majority are flagged as "short scoreboard." According to the NVIDIA docs, this usually indicates bank conflicts in shared memory load/store operations.

To confirm this, I ran some metric counts to measure the number of shared memory load/store bank conflicts for bf16 vs fp8. I observed orders of magnitude more conflicts with fp8 than with bf16, for both load and store operations:
**Load and store conflicts bf16**
```
triton_tem_fused_2 (8, 256, 1)x(256, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 0
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 111,863
-------------------------------------------------------- ----------- ------------
triton_tem_fused_2 (8, 256, 1)x(256, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 0
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 104,116
-------------------------------------------------------- ----------- ------------
triton_tem_fused_2 (8, 256, 1)x(256, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 0
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 114,396
-------------------------------------------------------- ----------- ------------
triton_tem_fused_2 (8, 256, 1)x(256, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 0
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 113,613
-------------------------------------------------------- ----------- ------------
triton_tem_fused_2 (8, 256, 1)x(256, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 0
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 106,008
-------------------------------------------------------- ----------- ------------
triton_tem_fused_2 (8, 256, 1)x(256, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 0
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 102,859
-------------------------------------------------------- ----------- ------------
triton_tem_fused_2 (8, 256, 1)x(256, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 0
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 101,981
-------------------------------------------------------- ----------- ------------
triton_tem_fused_2 (8, 256, 1)x(256, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 0
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 104,583
-------------------------------------------------------- ----------- ------------
```
**Load and store conflicts fp8**
```
triton_tem_fused__to_copy_mul_2 (8, 256, 1)x(128, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 6,467
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 58,782,390
-------------------------------------------------------- ----------- ------------
triton_tem_fused__to_copy_mul_2 (8, 256, 1)x(128, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 5,698
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 58,771,364
-------------------------------------------------------- ----------- ------------
triton_tem_fused__to_copy_mul_2 (8, 256, 1)x(128, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 6,234
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 58,783,926
-------------------------------------------------------- ----------- ------------
triton_tem_fused__to_copy_mul_2 (8, 256, 1)x(128, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 5,518
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 58,800,274
-------------------------------------------------------- ----------- ------------
triton_tem_fused__to_copy_mul_2 (8, 256, 1)x(128, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 7,216
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 58,776,341
-------------------------------------------------------- ----------- ------------
triton_tem_fused__to_copy_mul_2 (8, 256, 1)x(128, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 7,586
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 58,750,044
-------------------------------------------------------- ----------- ------------
triton_tem_fused__to_copy_mul_2 (8, 256, 1)x(128, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 5,236
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 58,797,745
-------------------------------------------------------- ----------- ------------
triton_tem_fused__to_copy_mul_2 (8, 256, 1)x(128, 1, 1), Context 1, Stream 7, Device 0, CC 9.0
Section: Command line profiler metrics
-------------------------------------------------------- ----------- ------------
Metric Name Metric Unit Metric Value
-------------------------------------------------------- ----------- ------------
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum 6,156
l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum 58,800,346
-------------------------------------------------------- ----------- ------------
```
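For reference, counters like the ones above can be collected with `ncu --metrics`, restricted to the flex attention template kernels as in the earlier runs. The metric names come from the output above; the rest of the invocation is a sketch.
```python
import subprocess

METRICS = ",".join([
    "l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum",
    "l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum",
])

for flag in ("--bf16", "--fp8"):
    # Profile only the triton template kernels that implement flex attention.
    subprocess.run(
        ["ncu", "--metrics", METRICS, "-k", "regex:triton_tem_.*",
         "python3", "profile_flex.py", flag],
        check=True,
    )
```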
cc @chauhang @penguinwu @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
2,858,589,755
|
[inductor] GraphLowering code movement
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147335
* #147331
Moved these methods under `__init__` to be more idiomatic.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,858,569,390
|
[ROCm][Windows] Disable Composable Kernels and Triton for Windows builds
|
m-gallus
|
closed
|
[
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Currently, Composable Kernels and Triton aren't available on Windows. This PR ensures that the files related to these dependencies are not included in the build.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
2,858,552,264
|
add unbacked strict mode
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147333
fixes #145775
This is the first step in introducing a "strict" mode where we don't silently specialize and don't silently graph break. At a high level, when we call mark_unbacked(..., strict=True), any time we specialize an unbacked symint we will explicitly error and tell the user that their unbacked dimension was specialized to a single value.
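A sketch of how the flag might be used is below; the exact `strict=` signature is an assumption based on this description, not verified API.
```python
import torch
from torch._dynamo.decorators import mark_unbacked

def f(x):
    # An equality assertion like this lets the compiler replace the unbacked
    # size with the constant 5, i.e. it specializes the dimension.
    torch._check(x.shape[0] == 5)
    return x + 1

x = torch.randn(5)
mark_unbacked(x, 0, strict=True)
# With strict=True, the specialization above should raise an explicit error
# instead of silently narrowing dim 0 to a single value.
torch.compile(f, fullgraph=True)(x)
```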
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
2,858,479,858
|
UNSTABLE rocm / linux-focal-rocm6.3-py3.10 / test (default)
|
amdfaa
|
closed
|
[
"module: rocm",
"module: ci",
"unstable"
] | 2
|
CONTRIBUTOR
|
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra
| true
|
2,858,475,374
|
[inductor] Freeze runtime asserts after shape prop but before codegen
|
bobrenjc93
|
closed
|
[
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #147331
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
2,858,333,473
|
Fix typo
|
12v
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"release notes: AO frontend"
] | 6
|
CONTRIBUTOR
| null | true
|